From patchwork Thu Jun 5 22:04:38 2014
X-Patchwork-Submitter: Christophe Lyon
X-Patchwork-Id: 31456
From: Christophe Lyon <christophe.lyon@linaro.org>
To: gcc-patches@gcc.gnu.org
Subject: [Patch ARM/testsuite 18/22] Add vld2/vld3/vld4 tests.
Date: Fri, 6 Jun 2014 00:04:38 +0200
Message-Id: <1402005882-31597-19-git-send-email-christophe.lyon@linaro.org>
In-Reply-To: <1402005882-31597-18-git-send-email-christophe.lyon@linaro.org>
References: <1402005882-31597-1-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-2-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-3-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-4-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-5-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-6-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-7-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-8-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-9-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-10-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-11-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-12-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-13-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-14-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-15-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-16-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-17-git-send-email-christophe.lyon@linaro.org>
 <1402005882-31597-18-git-send-email-christophe.lyon@linaro.org>

diff --git a/gcc/testsuite/gcc.target/arm/neon-intrinsics/vldX.c b/gcc/testsuite/gcc.target/arm/neon-intrinsics/vldX.c
new file mode 100644
index 0000000..f0156c1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/neon-intrinsics/vldX.c
@@ -0,0 +1,812 @@
+#include <arm_neon.h>
+#include "arm-neon-ref.h"
+#include "compute-ref-data.h"
+
+/* Expected results.  */
+
+/* vld2/chunk 0.  */
+VECT_VAR_DECL(expected_vld2_0,int,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                              0xf4, 0xf5, 0xf6, 0xf7 };
+VECT_VAR_DECL(expected_vld2_0,int,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
+VECT_VAR_DECL(expected_vld2_0,int,32,2) [] = { 0xfffffff0, 0xfffffff1 };
+VECT_VAR_DECL(expected_vld2_0,int,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected_vld2_0,uint,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                               0xf4, 0xf5, 0xf6, 0xf7 };
+VECT_VAR_DECL(expected_vld2_0,uint,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
+VECT_VAR_DECL(expected_vld2_0,uint,32,2) [] = { 0xfffffff0, 0xfffffff1 };
+VECT_VAR_DECL(expected_vld2_0,uint,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected_vld2_0,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                               0xf4, 0xf5, 0xf6, 0xf7 };
+VECT_VAR_DECL(expected_vld2_0,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
+VECT_VAR_DECL(expected_vld2_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
+VECT_VAR_DECL(expected_vld2_0,int,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                               0xf4, 0xf5, 0xf6, 0xf7,
+                                               0xf8, 0xf9, 0xfa, 0xfb,
+                                               0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld2_0,int,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
+                                               0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld2_0,int,32,4) [] = { 0xfffffff0, 0xfffffff1,
+                                               0xfffffff2, 0xfffffff3 };
+VECT_VAR_DECL(expected_vld2_0,int,64,2) [] = { 0x3333333333333333,
+                                               0x3333333333333333 };
+VECT_VAR_DECL(expected_vld2_0,uint,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                                0xf4, 0xf5, 0xf6, 0xf7,
+                                                0xf8, 0xf9, 0xfa, 0xfb,
+                                                0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld2_0,uint,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
+                                                0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld2_0,uint,32,4) [] = { 0xfffffff0, 0xfffffff1,
+                                                0xfffffff2, 0xfffffff3 };
+VECT_VAR_DECL(expected_vld2_0,uint,64,2) [] = { 0x3333333333333333,
+                                                0x3333333333333333 };
+VECT_VAR_DECL(expected_vld2_0,poly,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                                0xf4, 0xf5, 0xf6, 0xf7,
+                                                0xf8, 0xf9, 0xfa, 0xfb,
+                                                0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld2_0,poly,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
+                                                0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld2_0,hfloat,32,4) [] = { 0xc1800000, 0xc1700000,
+                                                  0xc1600000, 0xc1500000 };
+
+/* vld2/chunk 1.  */
+VECT_VAR_DECL(expected_vld2_1,int,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
+                                              0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld2_1,int,16,4) [] = { 0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld2_1,int,32,2) [] = { 0xfffffff2, 0xfffffff3 };
+VECT_VAR_DECL(expected_vld2_1,int,64,1) [] = { 0xfffffffffffffff1 };
+VECT_VAR_DECL(expected_vld2_1,uint,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
+                                               0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld2_1,uint,16,4) [] = { 0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld2_1,uint,32,2) [] = { 0xfffffff2, 0xfffffff3 };
+VECT_VAR_DECL(expected_vld2_1,uint,64,1) [] = { 0xfffffffffffffff1 };
+VECT_VAR_DECL(expected_vld2_1,poly,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
+                                               0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld2_1,poly,16,4) [] = { 0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld2_1,hfloat,32,2) [] = { 0xc1600000, 0xc1500000 };
+VECT_VAR_DECL(expected_vld2_1,int,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
+                                               0x4, 0x5, 0x6, 0x7,
+                                               0x8, 0x9, 0xa, 0xb,
+                                               0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld2_1,int,16,8) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb,
+                                               0xfffc, 0xfffd, 0xfffe, 0xffff };
+VECT_VAR_DECL(expected_vld2_1,int,32,4) [] = { 0xfffffff4, 0xfffffff5,
+                                               0xfffffff6, 0xfffffff7 };
+VECT_VAR_DECL(expected_vld2_1,int,64,2) [] = { 0x3333333333333333,
+                                               0x3333333333333333 };
+VECT_VAR_DECL(expected_vld2_1,uint,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
+                                                0x4, 0x5, 0x6, 0x7,
+                                                0x8, 0x9, 0xa, 0xb,
+                                                0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld2_1,uint,16,8) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb,
+                                                0xfffc, 0xfffd, 0xfffe, 0xffff };
+VECT_VAR_DECL(expected_vld2_1,uint,32,4) [] = { 0xfffffff4, 0xfffffff5,
+                                                0xfffffff6, 0xfffffff7 };
+VECT_VAR_DECL(expected_vld2_1,uint,64,2) [] = { 0x3333333333333333,
+                                                0x3333333333333333 };
+VECT_VAR_DECL(expected_vld2_1,poly,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
+                                                0x4, 0x5, 0x6, 0x7,
+                                                0x8, 0x9, 0xa, 0xb,
+                                                0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld2_1,poly,16,8) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb,
+                                                0xfffc, 0xfffd, 0xfffe, 0xffff };
+VECT_VAR_DECL(expected_vld2_1,hfloat,32,4) [] = { 0xc1400000, 0xc1300000,
+                                                  0xc1200000, 0xc1100000 };
+
+/* vld3/chunk 0.  */
+VECT_VAR_DECL(expected_vld3_0,int,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                              0xf4, 0xf5, 0xf6, 0xf7 };
+VECT_VAR_DECL(expected_vld3_0,int,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
+VECT_VAR_DECL(expected_vld3_0,int,32,2) [] = { 0xfffffff0, 0xfffffff1 };
+VECT_VAR_DECL(expected_vld3_0,int,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected_vld3_0,uint,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                               0xf4, 0xf5, 0xf6, 0xf7 };
+VECT_VAR_DECL(expected_vld3_0,uint,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
+VECT_VAR_DECL(expected_vld3_0,uint,32,2) [] = { 0xfffffff0, 0xfffffff1 };
+VECT_VAR_DECL(expected_vld3_0,uint,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected_vld3_0,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                               0xf4, 0xf5, 0xf6, 0xf7 };
+VECT_VAR_DECL(expected_vld3_0,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
+VECT_VAR_DECL(expected_vld3_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
+VECT_VAR_DECL(expected_vld3_0,int,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                               0xf4, 0xf5, 0xf6, 0xf7,
+                                               0xf8, 0xf9, 0xfa, 0xfb,
+                                               0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld3_0,int,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
+                                               0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld3_0,int,32,4) [] = { 0xfffffff0, 0xfffffff1,
+                                               0xfffffff2, 0xfffffff3 };
+VECT_VAR_DECL(expected_vld3_0,int,64,2) [] = { 0x3333333333333333,
+                                               0x3333333333333333 };
+VECT_VAR_DECL(expected_vld3_0,uint,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                                0xf4, 0xf5, 0xf6, 0xf7,
+                                                0xf8, 0xf9, 0xfa, 0xfb,
+                                                0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld3_0,uint,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
+                                                0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld3_0,uint,32,4) [] = { 0xfffffff0, 0xfffffff1,
+                                                0xfffffff2, 0xfffffff3 };
+VECT_VAR_DECL(expected_vld3_0,uint,64,2) [] = { 0x3333333333333333,
+                                                0x3333333333333333 };
+VECT_VAR_DECL(expected_vld3_0,poly,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                                0xf4, 0xf5, 0xf6, 0xf7,
+                                                0xf8, 0xf9, 0xfa, 0xfb,
+                                                0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld3_0,poly,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
+                                                0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld3_0,hfloat,32,4) [] = { 0xc1800000, 0xc1700000,
+                                                  0xc1600000, 0xc1500000 };
+
+/* vld3/chunk 1.  */
+VECT_VAR_DECL(expected_vld3_1,int,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
+                                              0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld3_1,int,16,4) [] = { 0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld3_1,int,32,2) [] = { 0xfffffff2, 0xfffffff3 };
+VECT_VAR_DECL(expected_vld3_1,int,64,1) [] = { 0xfffffffffffffff1 };
+VECT_VAR_DECL(expected_vld3_1,uint,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
+                                               0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld3_1,uint,16,4) [] = { 0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld3_1,uint,32,2) [] = { 0xfffffff2, 0xfffffff3 };
+VECT_VAR_DECL(expected_vld3_1,uint,64,1) [] = { 0xfffffffffffffff1 };
+VECT_VAR_DECL(expected_vld3_1,poly,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
+                                               0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld3_1,poly,16,4) [] = { 0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld3_1,hfloat,32,2) [] = { 0xc1600000, 0xc1500000 };
+VECT_VAR_DECL(expected_vld3_1,int,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
+                                               0x4, 0x5, 0x6, 0x7,
+                                               0x8, 0x9, 0xa, 0xb,
+                                               0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld3_1,int,16,8) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb,
+                                               0xfffc, 0xfffd, 0xfffe, 0xffff };
+VECT_VAR_DECL(expected_vld3_1,int,32,4) [] = { 0xfffffff4, 0xfffffff5,
+                                               0xfffffff6, 0xfffffff7 };
+VECT_VAR_DECL(expected_vld3_1,int,64,2) [] = { 0x3333333333333333,
+                                               0x3333333333333333 };
+VECT_VAR_DECL(expected_vld3_1,uint,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
+                                                0x4, 0x5, 0x6, 0x7,
+                                                0x8, 0x9, 0xa, 0xb,
+                                                0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld3_1,uint,16,8) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb,
+                                                0xfffc, 0xfffd, 0xfffe, 0xffff };
+VECT_VAR_DECL(expected_vld3_1,uint,32,4) [] = { 0xfffffff4, 0xfffffff5,
+                                                0xfffffff6, 0xfffffff7 };
+VECT_VAR_DECL(expected_vld3_1,uint,64,2) [] = { 0x3333333333333333,
+                                                0x3333333333333333 };
+VECT_VAR_DECL(expected_vld3_1,poly,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
+                                                0x4, 0x5, 0x6, 0x7,
+                                                0x8, 0x9, 0xa, 0xb,
+                                                0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld3_1,poly,16,8) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb,
+                                                0xfffc, 0xfffd, 0xfffe, 0xffff };
+VECT_VAR_DECL(expected_vld3_1,hfloat,32,4) [] = { 0xc1400000, 0xc1300000,
+                                                  0xc1200000, 0xc1100000 };
+
+/* vld3/chunk 2.  */
+VECT_VAR_DECL(expected_vld3_2,int,8,8) [] = { 0x0, 0x1, 0x2, 0x3,
+                                              0x4, 0x5, 0x6, 0x7 };
+VECT_VAR_DECL(expected_vld3_2,int,16,4) [] = { 0xfff8, 0xfff9,
+                                               0xfffa, 0xfffb };
+VECT_VAR_DECL(expected_vld3_2,int,32,2) [] = { 0xfffffff4, 0xfffffff5 };
+VECT_VAR_DECL(expected_vld3_2,int,64,1) [] = { 0xfffffffffffffff2 };
+VECT_VAR_DECL(expected_vld3_2,uint,8,8) [] = { 0x0, 0x1, 0x2, 0x3,
+                                               0x4, 0x5, 0x6, 0x7 };
+VECT_VAR_DECL(expected_vld3_2,uint,16,4) [] = { 0xfff8, 0xfff9,
+                                                0xfffa, 0xfffb };
+VECT_VAR_DECL(expected_vld3_2,uint,32,2) [] = { 0xfffffff4, 0xfffffff5 };
+VECT_VAR_DECL(expected_vld3_2,uint,64,1) [] = { 0xfffffffffffffff2 };
+VECT_VAR_DECL(expected_vld3_2,poly,8,8) [] = { 0x0, 0x1, 0x2, 0x3,
+                                               0x4, 0x5, 0x6, 0x7 };
+VECT_VAR_DECL(expected_vld3_2,poly,16,4) [] = { 0xfff8, 0xfff9,
+                                                0xfffa, 0xfffb };
+VECT_VAR_DECL(expected_vld3_2,hfloat,32,2) [] = { 0xc1400000, 0xc1300000 };
+VECT_VAR_DECL(expected_vld3_2,int,8,16) [] = { 0x10, 0x11, 0x12, 0x13,
+                                               0x14, 0x15, 0x16, 0x17,
+                                               0x18, 0x19, 0x1a, 0x1b,
+                                               0x1c, 0x1d, 0x1e, 0x1f };
+VECT_VAR_DECL(expected_vld3_2,int,16,8) [] = { 0x0, 0x1, 0x2, 0x3,
+                                               0x4, 0x5, 0x6, 0x7 };
+VECT_VAR_DECL(expected_vld3_2,int,32,4) [] = { 0xfffffff8, 0xfffffff9,
+                                               0xfffffffa, 0xfffffffb };
+VECT_VAR_DECL(expected_vld3_2,int,64,2) [] = { 0x3333333333333333,
+                                               0x3333333333333333 };
+VECT_VAR_DECL(expected_vld3_2,uint,8,16) [] = { 0x10, 0x11, 0x12, 0x13,
+                                                0x14, 0x15, 0x16, 0x17,
+                                                0x18, 0x19, 0x1a, 0x1b,
+                                                0x1c, 0x1d, 0x1e, 0x1f };
+VECT_VAR_DECL(expected_vld3_2,uint,16,8) [] = { 0x0, 0x1, 0x2, 0x3,
+                                                0x4, 0x5, 0x6, 0x7 };
+VECT_VAR_DECL(expected_vld3_2,uint,32,4) [] = { 0xfffffff8, 0xfffffff9,
+                                                0xfffffffa, 0xfffffffb };
+VECT_VAR_DECL(expected_vld3_2,uint,64,2) [] = { 0x3333333333333333,
+                                                0x3333333333333333 };
+VECT_VAR_DECL(expected_vld3_2,poly,8,16) [] = { 0x10, 0x11, 0x12, 0x13,
+                                                0x14, 0x15, 0x16, 0x17,
+                                                0x18, 0x19, 0x1a, 0x1b,
+                                                0x1c, 0x1d, 0x1e, 0x1f };
+VECT_VAR_DECL(expected_vld3_2,poly,16,8) [] = { 0x0, 0x1, 0x2, 0x3,
+                                                0x4, 0x5, 0x6, 0x7 };
+VECT_VAR_DECL(expected_vld3_2,hfloat,32,4) [] = { 0xc1000000, 0xc0e00000,
+                                                  0xc0c00000, 0xc0a00000 };
+
+/* vld4/chunk 0.  */
+VECT_VAR_DECL(expected_vld4_0,int,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                              0xf4, 0xf5, 0xf6, 0xf7 };
+VECT_VAR_DECL(expected_vld4_0,int,16,4) [] = { 0xfff0, 0xfff1,
+                                               0xfff2, 0xfff3 };
+VECT_VAR_DECL(expected_vld4_0,int,32,2) [] = { 0xfffffff0, 0xfffffff1 };
+VECT_VAR_DECL(expected_vld4_0,int,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected_vld4_0,uint,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                               0xf4, 0xf5, 0xf6, 0xf7 };
+VECT_VAR_DECL(expected_vld4_0,uint,16,4) [] = { 0xfff0, 0xfff1,
+                                                0xfff2, 0xfff3 };
+VECT_VAR_DECL(expected_vld4_0,uint,32,2) [] = { 0xfffffff0, 0xfffffff1 };
+VECT_VAR_DECL(expected_vld4_0,uint,64,1) [] = { 0xfffffffffffffff0 };
+VECT_VAR_DECL(expected_vld4_0,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                               0xf4, 0xf5, 0xf6, 0xf7 };
+VECT_VAR_DECL(expected_vld4_0,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
+VECT_VAR_DECL(expected_vld4_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
+VECT_VAR_DECL(expected_vld4_0,int,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                               0xf4, 0xf5, 0xf6, 0xf7,
+                                               0xf8, 0xf9, 0xfa, 0xfb,
+                                               0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld4_0,int,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
+                                               0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld4_0,int,32,4) [] = { 0xfffffff0, 0xfffffff1,
+                                               0xfffffff2, 0xfffffff3 };
+VECT_VAR_DECL(expected_vld4_0,int,64,2) [] = { 0x3333333333333333,
+                                               0x3333333333333333 };
+VECT_VAR_DECL(expected_vld4_0,uint,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                                0xf4, 0xf5, 0xf6, 0xf7,
+                                                0xf8, 0xf9, 0xfa, 0xfb,
+                                                0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld4_0,uint,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
+                                                0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld4_0,uint,32,4) [] = { 0xfffffff0, 0xfffffff1,
+                                                0xfffffff2, 0xfffffff3 };
+VECT_VAR_DECL(expected_vld4_0,uint,64,2) [] = { 0x3333333333333333,
+                                                0x3333333333333333 };
+VECT_VAR_DECL(expected_vld4_0,poly,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
+                                                0xf4, 0xf5, 0xf6, 0xf7,
+                                                0xf8, 0xf9, 0xfa, 0xfb,
+                                                0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld4_0,poly,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
+                                                0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld4_0,hfloat,32,4) [] = { 0xc1800000, 0xc1700000,
+                                                  0xc1600000, 0xc1500000 };
+
+/* vld4/chunk 1.  */
+VECT_VAR_DECL(expected_vld4_1,int,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
+                                              0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld4_1,int,16,4) [] = { 0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld4_1,int,32,2) [] = { 0xfffffff2, 0xfffffff3 };
+VECT_VAR_DECL(expected_vld4_1,int,64,1) [] = { 0xfffffffffffffff1 };
+VECT_VAR_DECL(expected_vld4_1,uint,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
+                                               0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld4_1,uint,16,4) [] = { 0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld4_1,uint,32,2) [] = { 0xfffffff2, 0xfffffff3 };
+VECT_VAR_DECL(expected_vld4_1,uint,64,1) [] = { 0xfffffffffffffff1 };
+VECT_VAR_DECL(expected_vld4_1,poly,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
+                                               0xfc, 0xfd, 0xfe, 0xff };
+VECT_VAR_DECL(expected_vld4_1,poly,16,4) [] = { 0xfff4, 0xfff5, 0xfff6, 0xfff7 };
+VECT_VAR_DECL(expected_vld4_1,hfloat,32,2) [] = { 0xc1600000, 0xc1500000 };
+VECT_VAR_DECL(expected_vld4_1,int,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
+                                               0x4, 0x5, 0x6, 0x7,
+                                               0x8, 0x9, 0xa, 0xb,
+                                               0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld4_1,int,16,8) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb,
+                                               0xfffc, 0xfffd, 0xfffe, 0xffff };
+VECT_VAR_DECL(expected_vld4_1,int,32,4) [] = { 0xfffffff4, 0xfffffff5,
+                                               0xfffffff6, 0xfffffff7 };
+VECT_VAR_DECL(expected_vld4_1,int,64,2) [] = { 0x3333333333333333,
+                                               0x3333333333333333 };
+VECT_VAR_DECL(expected_vld4_1,uint,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
+                                                0x4, 0x5, 0x6, 0x7,
+                                                0x8, 0x9, 0xa, 0xb,
+                                                0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld4_1,uint,16,8) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb,
+                                                0xfffc, 0xfffd, 0xfffe, 0xffff };
+VECT_VAR_DECL(expected_vld4_1,uint,32,4) [] = { 0xfffffff4, 0xfffffff5,
+                                                0xfffffff6, 0xfffffff7 };
+VECT_VAR_DECL(expected_vld4_1,uint,64,2) [] = { 0x3333333333333333,
+                                                0x3333333333333333 };
+VECT_VAR_DECL(expected_vld4_1,poly,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
+                                                0x4, 0x5, 0x6, 0x7,
+                                                0x8, 0x9, 0xa, 0xb,
+                                                0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld4_1,poly,16,8) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb,
+                                                0xfffc, 0xfffd, 0xfffe, 0xffff };
+VECT_VAR_DECL(expected_vld4_1,hfloat,32,4) [] = { 0xc1400000, 0xc1300000,
+                                                  0xc1200000, 0xc1100000 };
+
+/* vld4/chunk 2.  */
+VECT_VAR_DECL(expected_vld4_2,int,8,8) [] = { 0x0, 0x1, 0x2, 0x3,
+                                              0x4, 0x5, 0x6, 0x7 };
+VECT_VAR_DECL(expected_vld4_2,int,16,4) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb };
+VECT_VAR_DECL(expected_vld4_2,int,32,2) [] = { 0xfffffff4, 0xfffffff5 };
+VECT_VAR_DECL(expected_vld4_2,int,64,1) [] = { 0xfffffffffffffff2 };
+VECT_VAR_DECL(expected_vld4_2,uint,8,8) [] = { 0x0, 0x1, 0x2, 0x3,
+                                               0x4, 0x5, 0x6, 0x7 };
+VECT_VAR_DECL(expected_vld4_2,uint,16,4) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb };
+VECT_VAR_DECL(expected_vld4_2,uint,32,2) [] = { 0xfffffff4, 0xfffffff5 };
+VECT_VAR_DECL(expected_vld4_2,uint,64,1) [] = { 0xfffffffffffffff2 };
+VECT_VAR_DECL(expected_vld4_2,poly,8,8) [] = { 0x0, 0x1, 0x2, 0x3,
+                                               0x4, 0x5, 0x6, 0x7 };
+VECT_VAR_DECL(expected_vld4_2,poly,16,4) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb };
+VECT_VAR_DECL(expected_vld4_2,hfloat,32,2) [] = { 0xc1400000, 0xc1300000 };
+VECT_VAR_DECL(expected_vld4_2,int,8,16) [] = { 0x10, 0x11, 0x12, 0x13,
+                                               0x14, 0x15, 0x16, 0x17,
+                                               0x18, 0x19, 0x1a, 0x1b,
+                                               0x1c, 0x1d, 0x1e, 0x1f };
+VECT_VAR_DECL(expected_vld4_2,int,16,8) [] = { 0x0, 0x1, 0x2, 0x3,
+                                               0x4, 0x5, 0x6, 0x7 };
+VECT_VAR_DECL(expected_vld4_2,int,32,4) [] = { 0xfffffff8, 0xfffffff9,
+                                               0xfffffffa, 0xfffffffb };
+VECT_VAR_DECL(expected_vld4_2,int,64,2) [] = { 0x3333333333333333,
+                                               0x3333333333333333 };
+VECT_VAR_DECL(expected_vld4_2,uint,8,16) [] = { 0x10, 0x11, 0x12, 0x13,
+                                                0x14, 0x15, 0x16, 0x17,
+                                                0x18, 0x19, 0x1a, 0x1b,
+                                                0x1c, 0x1d, 0x1e, 0x1f };
+VECT_VAR_DECL(expected_vld4_2,uint,16,8) [] = { 0x0, 0x1, 0x2, 0x3,
+                                                0x4, 0x5, 0x6, 0x7 };
+VECT_VAR_DECL(expected_vld4_2,uint,32,4) [] = { 0xfffffff8, 0xfffffff9,
+                                                0xfffffffa, 0xfffffffb };
+VECT_VAR_DECL(expected_vld4_2,uint,64,2) [] = { 0x3333333333333333,
+                                                0x3333333333333333 };
+VECT_VAR_DECL(expected_vld4_2,poly,8,16) [] = { 0x10, 0x11, 0x12, 0x13,
+                                                0x14, 0x15, 0x16, 0x17,
+                                                0x18, 0x19, 0x1a, 0x1b,
+                                                0x1c, 0x1d, 0x1e, 0x1f };
+VECT_VAR_DECL(expected_vld4_2,poly,16,8) [] = { 0x0, 0x1, 0x2, 0x3,
+                                                0x4, 0x5, 0x6, 0x7 };
+VECT_VAR_DECL(expected_vld4_2,hfloat,32,4) [] = { 0xc1000000, 0xc0e00000,
+                                                  0xc0c00000, 0xc0a00000 };
+
+/* vld4/chunk 3.  */
+VECT_VAR_DECL(expected_vld4_3,int,8,8) [] = { 0x8, 0x9, 0xa, 0xb,
+                                              0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld4_3,int,16,4) [] = { 0xfffc, 0xfffd, 0xfffe, 0xffff };
+VECT_VAR_DECL(expected_vld4_3,int,32,2) [] = { 0xfffffff6, 0xfffffff7 };
+VECT_VAR_DECL(expected_vld4_3,int,64,1) [] = { 0xfffffffffffffff3 };
+VECT_VAR_DECL(expected_vld4_3,uint,8,8) [] = { 0x8, 0x9, 0xa, 0xb,
+                                               0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld4_3,uint,16,4) [] = { 0xfffc, 0xfffd, 0xfffe, 0xffff };
+VECT_VAR_DECL(expected_vld4_3,uint,32,2) [] = { 0xfffffff6, 0xfffffff7 };
+VECT_VAR_DECL(expected_vld4_3,uint,64,1) [] = { 0xfffffffffffffff3 };
+VECT_VAR_DECL(expected_vld4_3,poly,8,8) [] = { 0x8, 0x9, 0xa, 0xb,
+                                               0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld4_3,poly,16,4) [] = { 0xfffc, 0xfffd, 0xfffe, 0xffff };
+VECT_VAR_DECL(expected_vld4_3,hfloat,32,2) [] = { 0xc1200000, 0xc1100000 };
+VECT_VAR_DECL(expected_vld4_3,int,8,16) [] = { 0x20, 0x21, 0x22, 0x23,
+                                               0x24, 0x25, 0x26, 0x27,
+                                               0x28, 0x29, 0x2a, 0x2b,
+                                               0x2c, 0x2d, 0x2e, 0x2f };
+VECT_VAR_DECL(expected_vld4_3,int,16,8) [] = { 0x8, 0x9, 0xa, 0xb,
+                                               0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld4_3,int,32,4) [] = { 0xfffffffc, 0xfffffffd,
+                                               0xfffffffe, 0xffffffff };
+VECT_VAR_DECL(expected_vld4_3,int,64,2) [] = { 0x3333333333333333,
+                                               0x3333333333333333 };
+VECT_VAR_DECL(expected_vld4_3,uint,8,16) [] = { 0x20, 0x21, 0x22, 0x23,
+                                                0x24, 0x25, 0x26, 0x27,
+                                                0x28, 0x29, 0x2a, 0x2b,
+                                                0x2c, 0x2d, 0x2e, 0x2f };
+VECT_VAR_DECL(expected_vld4_3,uint,16,8) [] = { 0x8, 0x9, 0xa, 0xb,
+                                                0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld4_3,uint,32,4) [] = { 0xfffffffc, 0xfffffffd,
+                                                0xfffffffe, 0xffffffff };
+VECT_VAR_DECL(expected_vld4_3,uint,64,2) [] = { 0x3333333333333333,
+                                                0x3333333333333333 };
+VECT_VAR_DECL(expected_vld4_3,poly,8,16) [] = { 0x20, 0x21, 0x22, 0x23,
+                                                0x24, 0x25, 0x26, 0x27,
+                                                0x28, 0x29, 0x2a, 0x2b,
+                                                0x2c, 0x2d, 0x2e, 0x2f };
+VECT_VAR_DECL(expected_vld4_3,poly,16,8) [] = { 0x8, 0x9, 0xa, 0xb,
+                                                0xc, 0xd, 0xe, 0xf };
+VECT_VAR_DECL(expected_vld4_3,hfloat,32,4) [] = { 0xc0800000, 0xc0400000,
+                                                  0xc0000000, 0xbf800000 };
+
+void exec_vldX (void)
+{
+  /* In this case, input variables are arrays of vectors.  */
+#define DECL_VLDX(T1, W, N, X)                                          \
+  VECT_ARRAY_TYPE(T1, W, N, X) VECT_ARRAY_VAR(vector, T1, W, N, X);     \
+  VECT_VAR_DECL(result_bis_##X, T1, W, N)[X * N]
+
+  /* We need to use a temporary result buffer (result_bis), because
+     the one used for other tests is not large enough.  A subset of the
+     result data is moved from result_bis to result, and it is this
+     subset which is used to check the actual behaviour.  The following
+     macro makes it possible to move another chunk of data from
+     result_bis to result.  */
+#define TEST_VLDX(Q, T1, T2, W, N, X)                                   \
+  VECT_ARRAY_VAR(vector, T1, W, N, X) =                                 \
+    /* Use a dedicated init buffer, of size X.  */                      \
+    vld##X##Q##_##T2##W(VECT_ARRAY_VAR(buffer_vld##X, T1, W, N, X));    \
+  vst##X##Q##_##T2##W(VECT_VAR(result_bis_##X, T1, W, N),               \
+                      VECT_ARRAY_VAR(vector, T1, W, N, X));             \
+  memcpy(VECT_VAR(result, T1, W, N), VECT_VAR(result_bis_##X, T1, W, N), \
+         sizeof(VECT_VAR(result, T1, W, N)));
+
+  /* Overwrite "result" with the contents of "result_bis"[Y].  */
+#define TEST_EXTRA_CHUNK(T1, W, N, X, Y)                \
+  memcpy(VECT_VAR(result, T1, W, N),                    \
+         &(VECT_VAR(result_bis_##X, T1, W, N)[Y*N]),    \
+         sizeof(VECT_VAR(result, T1, W, N)));
+
+  /* We need all variants in 64 bits, but there is no 64x2 variant.  */
+#define DECL_ALL_VLDX(X)        \
+  DECL_VLDX(int, 8, 8, X);      \
+  DECL_VLDX(int, 16, 4, X);     \
+  DECL_VLDX(int, 32, 2, X);     \
+  DECL_VLDX(int, 64, 1, X);     \
+  DECL_VLDX(uint, 8, 8, X);     \
+  DECL_VLDX(uint, 16, 4, X);    \
+  DECL_VLDX(uint, 32, 2, X);    \
+  DECL_VLDX(uint, 64, 1, X);    \
+  DECL_VLDX(poly, 8, 8, X);     \
+  DECL_VLDX(poly, 16, 4, X);    \
+  DECL_VLDX(float, 32, 2, X);   \
+  DECL_VLDX(int, 8, 16, X);     \
+  DECL_VLDX(int, 16, 8, X);     \
+  DECL_VLDX(int, 32, 4, X);     \
+  DECL_VLDX(uint, 8, 16, X);    \
+  DECL_VLDX(uint, 16, 8, X);    \
+  DECL_VLDX(uint, 32, 4, X);    \
+  DECL_VLDX(poly, 8, 16, X);    \
+  DECL_VLDX(poly, 16, 8, X);    \
+  DECL_VLDX(float, 32, 4, X)
+
+#if __ARM_NEON_FP16_INTRINSICS
+#define DECL_ALL_VLDX_FP16(X)   \
+  DECL_VLDX(float, 16, 4, X);   \
+  DECL_VLDX(float, 16, 8, X)
+#endif
+
+#define TEST_ALL_VLDX(X)                \
+  TEST_VLDX(, int, s, 8, 8, X);         \
+  TEST_VLDX(, int, s, 16, 4, X);        \
+  TEST_VLDX(, int, s, 32, 2, X);        \
+  TEST_VLDX(, int, s, 64, 1, X);        \
+  TEST_VLDX(, uint, u, 8, 8, X);        \
+  TEST_VLDX(, uint, u, 16, 4, X);       \
+  TEST_VLDX(, uint, u, 32, 2, X);       \
+  TEST_VLDX(, uint, u, 64, 1, X);       \
+  TEST_VLDX(, poly, p, 8, 8, X);        \
+  TEST_VLDX(, poly, p, 16, 4, X);       \
+  TEST_VLDX(, float, f, 32, 2, X);      \
+  TEST_VLDX(q, int, s, 8, 16, X);       \
+  TEST_VLDX(q, int, s, 16, 8, X);       \
+  TEST_VLDX(q, int, s, 32, 4, X);       \
+  TEST_VLDX(q, uint, u, 8, 16, X);      \
+  TEST_VLDX(q, uint, u, 16, 8, X);      \
+  TEST_VLDX(q, uint, u, 32, 4, X);      \
+  TEST_VLDX(q, poly, p, 8, 16, X);      \
+  TEST_VLDX(q, poly, p, 16, 8, X);      \
+  TEST_VLDX(q, float, f, 32, 4, X)
+
+#if __ARM_NEON_FP16_INTRINSICS
+#define TEST_ALL_VLDX_FP16(X)           \
+  TEST_VLDX(, float, f, 16, 4, X);      \
+  TEST_VLDX(q, float, f, 16, 8, X)
+#endif
+
+#define TEST_ALL_EXTRA_CHUNKS(X, Y)             \
+  TEST_EXTRA_CHUNK(int, 8, 8, X, Y);            \
+  TEST_EXTRA_CHUNK(int, 16, 4, X, Y);           \
+  TEST_EXTRA_CHUNK(int, 32, 2, X, Y);           \
+  TEST_EXTRA_CHUNK(int, 64, 1, X, Y);           \
+  TEST_EXTRA_CHUNK(uint, 8, 8, X, Y);           \
+  TEST_EXTRA_CHUNK(uint, 16, 4, X, Y);          \
+  TEST_EXTRA_CHUNK(uint, 32, 2, X, Y);          \
+  TEST_EXTRA_CHUNK(uint, 64, 1, X, Y);          \
+  TEST_EXTRA_CHUNK(poly, 8, 8, X, Y);           \
+  TEST_EXTRA_CHUNK(poly, 16, 4, X, Y);          \
+  TEST_EXTRA_CHUNK(float, 32, 2, X, Y);         \
+  TEST_EXTRA_CHUNK(int, 8, 16, X, Y);           \
+  TEST_EXTRA_CHUNK(int, 16, 8, X, Y);           \
+  TEST_EXTRA_CHUNK(int, 32, 4, X, Y);           \
+  TEST_EXTRA_CHUNK(uint, 8, 16, X, Y);          \
+  TEST_EXTRA_CHUNK(uint, 16, 8, X, Y);          \
+  TEST_EXTRA_CHUNK(uint, 32, 4, X, Y);          \
+  TEST_EXTRA_CHUNK(poly, 8, 16, X, Y);          \
+  TEST_EXTRA_CHUNK(poly, 16, 8, X, Y);          \
+  TEST_EXTRA_CHUNK(float, 32, 4, X, Y)
+
+#if __ARM_NEON_FP16_INTRINSICS
+#define TEST_ALL_EXTRA_CHUNKS_FP16(X, Y)        \
+  TEST_EXTRA_CHUNK(float, 16, 4, X, Y);         \
+  TEST_EXTRA_CHUNK(float, 16, 8, X, Y)
+#endif
+
+  DECL_ALL_VLDX(2);
+  DECL_ALL_VLDX(3);
+  DECL_ALL_VLDX(4);
+
+#if __ARM_NEON_FP16_INTRINSICS
+  DECL_ALL_VLDX_FP16(2);
+  DECL_ALL_VLDX_FP16(3);
+  DECL_ALL_VLDX_FP16(4);
+#endif
+
+  /* Special input buffers of suitable size are needed for vld2/vld3/vld4.  */
+  /* Input buffers for vld2, 1 of each size.  */
+  VECT_ARRAY_INIT2(buffer_vld2, int, 8, 8);
+  PAD(buffer_vld2_pad, int, 8, 8);
+  VECT_ARRAY_INIT2(buffer_vld2, int, 16, 4);
+  PAD(buffer_vld2_pad, int, 16, 4);
+  VECT_ARRAY_INIT2(buffer_vld2, int, 32, 2);
+  PAD(buffer_vld2_pad, int, 32, 2);
+  VECT_ARRAY_INIT2(buffer_vld2, int, 64, 1);
+  PAD(buffer_vld2_pad, int, 64, 1);
+  VECT_ARRAY_INIT2(buffer_vld2, uint, 8, 8);
+  PAD(buffer_vld2_pad, uint, 8, 8);
+  VECT_ARRAY_INIT2(buffer_vld2, uint, 16, 4);
+  PAD(buffer_vld2_pad, uint, 16, 4);
+  VECT_ARRAY_INIT2(buffer_vld2, uint, 32, 2);
+  PAD(buffer_vld2_pad, uint, 32, 2);
+  VECT_ARRAY_INIT2(buffer_vld2, uint, 64, 1);
+  PAD(buffer_vld2_pad, uint, 64, 1);
+  VECT_ARRAY_INIT2(buffer_vld2, poly, 8, 8);
+  PAD(buffer_vld2_pad, poly, 8, 8);
+  VECT_ARRAY_INIT2(buffer_vld2, poly, 16, 4);
+  PAD(buffer_vld2_pad, poly, 16, 4);
+  VECT_ARRAY_INIT2(buffer_vld2, float, 32, 2);
+  PAD(buffer_vld2_pad, float, 32, 2);
+#if __ARM_NEON_FP16_INTRINSICS
+  float16_t buffer_vld2_float16x4x2[4*2] = {0xcc00 /* -16 */, 0xcb80 /* -15 */,
+                                            0xcb00 /* -14 */, 0xca80 /* -13 */,
+                                            0xca00 /* -12 */, 0xc980 /* -11 */,
+                                            0xc900 /* -10 */, 0xc880 /* -9 */};
+  PAD(buffer_vld2_pad, float, 16, 4);
+#endif
+  VECT_ARRAY_INIT2(buffer_vld2, int, 8, 16);
+  PAD(buffer_vld2_pad, int, 8, 16);
+  VECT_ARRAY_INIT2(buffer_vld2, int, 16, 8);
+  PAD(buffer_vld2_pad, int, 16, 8);
+  VECT_ARRAY_INIT2(buffer_vld2, int, 32, 4);
+  PAD(buffer_vld2_pad, int, 32, 4);
+  VECT_ARRAY_INIT2(buffer_vld2, int, 64, 2);
+  PAD(buffer_vld2_pad, int, 64, 2);
+  VECT_ARRAY_INIT2(buffer_vld2, uint, 8, 16);
+  PAD(buffer_vld2_pad, uint, 8, 16);
+  VECT_ARRAY_INIT2(buffer_vld2, uint, 16, 8);
+  PAD(buffer_vld2_pad, uint, 16, 8);
+  VECT_ARRAY_INIT2(buffer_vld2, uint, 32, 4);
+  PAD(buffer_vld2_pad, uint, 32, 4);
+  VECT_ARRAY_INIT2(buffer_vld2, uint, 64, 2);
+  PAD(buffer_vld2_pad, uint, 64, 2);
+  VECT_ARRAY_INIT2(buffer_vld2, poly, 8, 16);
+  PAD(buffer_vld2_pad, poly, 8, 16);
+  VECT_ARRAY_INIT2(buffer_vld2, poly, 16, 8);
+  PAD(buffer_vld2_pad, poly, 16, 8);
+  VECT_ARRAY_INIT2(buffer_vld2, float, 32, 4);
+  PAD(buffer_vld2_pad, float, 32, 4);
+#if __ARM_NEON_FP16_INTRINSICS
+  float16_t buffer_vld2_float16x8x2[8*2] = {0xcc00 /* -16 */, 0xcb80 /* -15 */,
+                                            0xcb00 /* -14 */, 0xca80 /* -13 */,
+                                            0xca00 /* -12 */, 0xc980 /* -11 */,
+                                            0xc900 /* -10 */, 0xc880 /* -9 */,
+                                            0xc800 /* -8 */, 0xc700 /* -7 */,
+                                            0xc600 /* -6 */, 0xc500 /* -5 */,
+                                            0xc400 /* -4 */, 0xc200 /* -3 */,
+                                            0xc000 /* -2 */, 0xbc00 /* -1 */};
+  PAD(buffer_vld2_pad, float, 16, 8);
+#endif
+
+  /* Input buffers for vld3, 1 of each size.  */
+  VECT_ARRAY_INIT3(buffer_vld3, int, 8, 8);
+  PAD(buffer_vld3_pad, int, 8, 8);
+  VECT_ARRAY_INIT3(buffer_vld3, int, 16, 4);
+  PAD(buffer_vld3_pad, int, 16, 4);
+  VECT_ARRAY_INIT3(buffer_vld3, int, 32, 2);
+  PAD(buffer_vld3_pad, int, 32, 2);
+  VECT_ARRAY_INIT3(buffer_vld3, int, 64, 1);
+  PAD(buffer_vld3_pad, int, 64, 1);
+  VECT_ARRAY_INIT3(buffer_vld3, uint, 8, 8);
+  PAD(buffer_vld3_pad, uint, 8, 8);
+  VECT_ARRAY_INIT3(buffer_vld3, uint, 16, 4);
+  PAD(buffer_vld3_pad, uint, 16, 4);
+  VECT_ARRAY_INIT3(buffer_vld3, uint, 32, 2);
+  PAD(buffer_vld3_pad, uint, 32, 2);
+  VECT_ARRAY_INIT3(buffer_vld3, uint, 64, 1);
+  PAD(buffer_vld3_pad, uint, 64, 1);
+  VECT_ARRAY_INIT3(buffer_vld3, poly, 8, 8);
+  PAD(buffer_vld3_pad, poly, 8, 8);
+  VECT_ARRAY_INIT3(buffer_vld3, poly, 16, 4);
+  PAD(buffer_vld3_pad, poly, 16, 4);
+  VECT_ARRAY_INIT3(buffer_vld3, float, 32, 2);
+  PAD(buffer_vld3_pad, float, 32, 2);
+#if __ARM_NEON_FP16_INTRINSICS
+  float16_t buffer_vld3_float16x4x3[4*3] = {0xcc00 /* -16 */, 0xcb80 /* -15 */,
+					    0xcb00 /* -14 */, 0xca80 /* -13 */,
+					    0xca00 /* -12 */, 0xc980 /* -11 */,
+					    0xc900 /* -10 */, 0xc880 /* -9 */,
+					    0xc800 /* -8 */, 0xc700 /* -7 */,
+					    0xc600 /* -6 */, 0xc500 /* -5 */};
+  PAD(buffer_vld3_pad, float, 16, 4);
+#endif
+  VECT_ARRAY_INIT3(buffer_vld3, int, 8, 16);
+  PAD(buffer_vld3_pad, int, 8, 16);
+  VECT_ARRAY_INIT3(buffer_vld3, int, 16, 8);
+  PAD(buffer_vld3_pad, int, 16, 8);
+  VECT_ARRAY_INIT3(buffer_vld3, int, 32, 4);
+  PAD(buffer_vld3_pad, int, 32, 4);
+  VECT_ARRAY_INIT3(buffer_vld3, int, 64, 2);
+  PAD(buffer_vld3_pad, int, 64, 2);
+  VECT_ARRAY_INIT3(buffer_vld3, uint, 8, 16);
+  PAD(buffer_vld3_pad, uint, 8, 16);
+  VECT_ARRAY_INIT3(buffer_vld3, uint, 16, 8);
+  PAD(buffer_vld3_pad, uint, 16, 8);
+  VECT_ARRAY_INIT3(buffer_vld3, uint, 32, 4);
+  PAD(buffer_vld3_pad, uint, 32, 4);
+  VECT_ARRAY_INIT3(buffer_vld3, uint, 64, 2);
+  PAD(buffer_vld3_pad, uint, 64, 2);
+  VECT_ARRAY_INIT3(buffer_vld3, poly, 8, 16);
+  PAD(buffer_vld3_pad, poly, 8, 16);
+  VECT_ARRAY_INIT3(buffer_vld3, poly, 16, 8);
+  PAD(buffer_vld3_pad, poly, 16, 8);
+  VECT_ARRAY_INIT3(buffer_vld3, float, 32, 4);
+  PAD(buffer_vld3_pad, float, 32, 4);
+#if __ARM_NEON_FP16_INTRINSICS
+  float16_t buffer_vld3_float16x8x3[8*3] = {0xcc00 /* -16 */, 0xcb80 /* -15 */,
+					    0xcb00 /* -14 */, 0xca80 /* -13 */,
+					    0xca00 /* -12 */, 0xc980 /* -11 */,
+					    0xc900 /* -10 */, 0xc880 /* -9 */,
+					    0xc800 /* -8 */, 0xc700 /* -7 */,
+					    0xc600 /* -6 */, 0xc500 /* -5 */,
+					    0xc400 /* -4 */, 0xc200 /* -3 */,
+					    0xc000 /* -2 */, 0xbc00 /* -1 */,
+					    0, 0x3c00 /* 1 */,
+					    0x4000 /* 2 */, 0x4200 /* 3 */,
+					    0x4400 /* 4 */, 0x4500 /* 5 */,
+					    0x4600 /* 6 */, 0x4700 /* 7 */};
+  PAD(buffer_vld3_pad, float, 16, 8);
+#endif
+
+  /* Input buffers for vld4, 1 of each size */
+  VECT_ARRAY_INIT4(buffer_vld4, int, 8, 8);
+  PAD(buffer_vld4_pad, int, 8, 8);
+  VECT_ARRAY_INIT4(buffer_vld4, int, 16, 4);
+  PAD(buffer_vld4_pad, int, 16, 4);
+  VECT_ARRAY_INIT4(buffer_vld4, int, 32, 2);
+  PAD(buffer_vld4_pad, int, 32, 2);
+  VECT_ARRAY_INIT4(buffer_vld4, int, 64, 1);
+  PAD(buffer_vld4_pad, int, 64, 1);
+  VECT_ARRAY_INIT4(buffer_vld4, uint, 8, 8);
+  PAD(buffer_vld4_pad, uint, 8, 8);
+  VECT_ARRAY_INIT4(buffer_vld4, uint, 16, 4);
+  PAD(buffer_vld4_pad, uint, 16, 4);
+  VECT_ARRAY_INIT4(buffer_vld4, uint, 32, 2);
+  PAD(buffer_vld4_pad, uint, 32, 2);
+  VECT_ARRAY_INIT4(buffer_vld4, uint, 64, 1);
+  PAD(buffer_vld4_pad, uint, 64, 1);
+  VECT_ARRAY_INIT4(buffer_vld4, poly, 8, 8);
+  PAD(buffer_vld4_pad, poly, 8, 8);
+  VECT_ARRAY_INIT4(buffer_vld4, poly, 16, 4);
+  PAD(buffer_vld4_pad, poly, 16, 4);
+  VECT_ARRAY_INIT4(buffer_vld4, float, 32, 2);
+  PAD(buffer_vld4_pad, float, 32, 2);
+#if __ARM_NEON_FP16_INTRINSICS
+  float16_t buffer_vld4_float16x4x4[4*4] = {0xcc00 /* -16 */, 0xcb80 /* -15 */,
+					    0xcb00 /* -14 */, 0xca80 /* -13 */,
+					    0xca00 /* -12 */, 0xc980 /* -11 */,
+					    0xc900 /* -10 */, 0xc880 /* -9 */,
+					    0xc800 /* -8 */, 0xc700 /* -7 */,
+					    0xc600 /* -6 */, 0xc500 /* -5 */,
+					    0xc400 /* -4 */, 0xc200 /* -3 */,
+					    0xc000 /* -2 */, 0xbc00 /* -1 */};
+  PAD(buffer_vld4_pad, float, 16, 4);
+#endif
+  VECT_ARRAY_INIT4(buffer_vld4, int, 8, 16);
+  PAD(buffer_vld4_pad, int, 8, 16);
+  VECT_ARRAY_INIT4(buffer_vld4, int, 16, 8);
+  PAD(buffer_vld4_pad, int, 16, 8);
+  VECT_ARRAY_INIT4(buffer_vld4, int, 32, 4);
+  PAD(buffer_vld4_pad, int, 32, 4);
+  VECT_ARRAY_INIT4(buffer_vld4, int, 64, 2);
+  PAD(buffer_vld4_pad, int, 64, 2);
+  VECT_ARRAY_INIT4(buffer_vld4, uint, 8, 16);
+  PAD(buffer_vld4_pad, uint, 8, 16);
+  VECT_ARRAY_INIT4(buffer_vld4, uint, 16, 8);
+  PAD(buffer_vld4_pad, uint, 16, 8);
+  VECT_ARRAY_INIT4(buffer_vld4, uint, 32, 4);
+  PAD(buffer_vld4_pad, uint, 32, 4);
+  VECT_ARRAY_INIT4(buffer_vld4, uint, 64, 2);
+  PAD(buffer_vld4_pad, uint, 64, 2);
+  VECT_ARRAY_INIT4(buffer_vld4, poly, 8, 16);
+  PAD(buffer_vld4_pad, poly, 8, 16);
+  VECT_ARRAY_INIT4(buffer_vld4, poly, 16, 8);
+  PAD(buffer_vld4_pad, poly, 16, 8);
+  VECT_ARRAY_INIT4(buffer_vld4, float, 32, 4);
+  PAD(buffer_vld4_pad, float, 32, 4);
+#if __ARM_NEON_FP16_INTRINSICS
+  float16_t buffer_vld4_float16x8x4[8*4] = {0xcc00 /* -16 */, 0xcb80 /* -15 */,
+					    0xcb00 /* -14 */, 0xca80 /* -13 */,
+					    0xca00 /* -12 */, 0xc980 /* -11 */,
+					    0xc900 /* -10 */, 0xc880 /* -9 */,
+					    0xc800 /* -8 */, 0xc700 /* -7 */,
+					    0xc600 /* -6 */, 0xc500 /* -5 */,
+					    0xc400 /* -4 */, 0xc200 /* -3 */,
+					    0xc000 /* -2 */, 0xbc00 /* -1 */,
+					    0, 0x3c00 /* 1 */,
+					    0x4000 /* 2 */, 0x4200 /* 3 */,
+					    0x4400 /* 4 */, 0x4500 /* 5 */,
+					    0x4600 /* 6 */, 0x4700 /* 7 */,
+					    0x4800 /* 8 */, 0x4880 /* 9 */,
+					    0x4900 /* 10 */, 0x4980 /* 11 */,
+					    0x4a00 /* 12 */, 0x4a80 /* 13 */,
+					    0x4b00 /* 14 */, 0x4b80 /* 15 */};
+  PAD(buffer_vld4_pad, float, 16, 8);
+#endif
+
+  /* Check vld2/vld2q. */
+  clean_results ();
+#define TEST_MSG "VLD2/VLD2Q"
+  TEST_ALL_VLDX(2);
+#if __ARM_NEON_FP16_INTRINSICS
+  TEST_ALL_VLDX_FP16(2);
+#endif
+  CHECK_RESULTS_NAMED (TEST_MSG, expected_vld2_0, "chunk 0");
+
+  TEST_ALL_EXTRA_CHUNKS(2, 1);
+#if __ARM_NEON_FP16_INTRINSICS
+  TEST_ALL_EXTRA_CHUNKS_FP16(2, 1);
+#endif
+  CHECK_RESULTS_NAMED (TEST_MSG, expected_vld2_1, "chunk 1");
+
+  /* Check vld3/vld3q. */
+  clean_results ();
+#undef TEST_MSG
+#define TEST_MSG "VLD3/VLD3Q"
+  TEST_ALL_VLDX(3);
+#if __ARM_NEON_FP16_INTRINSICS
+  TEST_ALL_VLDX_FP16(3);
+#endif
+  CHECK_RESULTS_NAMED (TEST_MSG, expected_vld3_0, "chunk 0");
+
+  TEST_ALL_EXTRA_CHUNKS(3, 1);
+#if __ARM_NEON_FP16_INTRINSICS
+  TEST_ALL_EXTRA_CHUNKS_FP16(3, 1);
+#endif
+  CHECK_RESULTS_NAMED (TEST_MSG, expected_vld3_1, "chunk 1");
+
+  TEST_ALL_EXTRA_CHUNKS(3, 2);
+#if __ARM_NEON_FP16_INTRINSICS
+  TEST_ALL_EXTRA_CHUNKS_FP16(3, 2);
+#endif
+  CHECK_RESULTS_NAMED (TEST_MSG, expected_vld3_2, "chunk 2");
+
+  /* Check vld4/vld4q. */
+  clean_results ();
+#undef TEST_MSG
+#define TEST_MSG "VLD4/VLD4Q"
+  TEST_ALL_VLDX(4);
+#if __ARM_NEON_FP16_INTRINSICS
+  TEST_ALL_VLDX_FP16(4);
+#endif
+  CHECK_RESULTS_NAMED (TEST_MSG, expected_vld4_0, "chunk 0");
+
+  TEST_ALL_EXTRA_CHUNKS(4, 1);
+#if __ARM_NEON_FP16_INTRINSICS
+  TEST_ALL_EXTRA_CHUNKS_FP16(4, 1);
+#endif
+  CHECK_RESULTS_NAMED (TEST_MSG, expected_vld4_1, "chunk 1");
+
+  TEST_ALL_EXTRA_CHUNKS(4, 2);
+#if __ARM_NEON_FP16_INTRINSICS
+  TEST_ALL_EXTRA_CHUNKS_FP16(4, 2);
+#endif
+  CHECK_RESULTS_NAMED (TEST_MSG, expected_vld4_2, "chunk 2");
+
+  TEST_ALL_EXTRA_CHUNKS(4, 3);
+#if __ARM_NEON_FP16_INTRINSICS
+  TEST_ALL_EXTRA_CHUNKS_FP16(4, 3);
+#endif
+  CHECK_RESULTS_NAMED (TEST_MSG, expected_vld4_3, "chunk 3");
+}
+
+int main (void)
+{
+  exec_vldX ();
+  return 0;
+}
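Reviewer note (not part of the patch): the float16 buffers above initialize each float16_t element from a raw IEEE 754 binary16 bit pattern, with the decoded value noted in a comment. A quick way to cross-check those comments is to decode the bit patterns with Python's struct module ('e' half-precision format, Python 3.6+); a minimal sketch:

```python
import struct

def f16_bits_to_float(bits):
    """Decode an IEEE 754 binary16 bit pattern into a Python float."""
    return struct.unpack('<e', struct.pack('<H', bits))[0]

# Spot-check the value comments used in the vldX FP16 input buffers.
assert f16_bits_to_float(0xcc00) == -16.0
assert f16_bits_to_float(0xc500) == -5.0   # i.e. /* -5 */, not /* -6 */
assert f16_bits_to_float(0xbc00) == -1.0
assert f16_bits_to_float(0x3c00) == 1.0
assert f16_bits_to_float(0x4b80) == 15.0
```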