From patchwork Tue Nov 17 13:22:22 2015
X-Patchwork-Submitter: Kyrylo Tkachov
X-Patchwork-Id: 56791
Message-ID: <564B2A0E.6000708@arm.com>
Date: Tue, 17 Nov 2015 13:22:22 +0000
From: Kyrill Tkachov
To: Ramana Radhakrishnan, GCC Patches
CC: Ramana Radhakrishnan, Richard Earnshaw
Subject: Re: [PATCH][ARM] PR 68143 Properly update memory offsets when expanding setmem
References: <563C84ED.4010603@arm.com> <564B1764.8000202@foss.arm.com> <564B2462.90704@arm.com>
In-Reply-To: <564B2462.90704@arm.com>

On 17/11/15 12:58, Kyrill Tkachov wrote:
> Hi Ramana,
>
> On 17/11/15 12:02, Ramana Radhakrishnan wrote:
>>
>> On 06/11/15 10:46, Kyrill Tkachov wrote:
>>> Hi all,
>>>
>>> In this wrong-code PR the vector setmem expansion, and arm_block_set_aligned_vect in particular,
>>> use the wrong offset when calling
>>> adjust_automodify_address. In the attached testcase, during the
>>> initial zeroing out we get two V16QI stores, but they are both recorded by adjust_automodify_address
>>> as modifying x+0 rather than x+0 and x+12 (the total size to be written is 28).
>>>
>>> This led to the scheduling pass moving the store from "x.g = 2;" to before the zeroing stores.
>>>
>>> This patch fixes the problem by keeping track of the offset to which stores are emitted and
>>> passing it to adjust_automodify_address as appropriate.
>>>
>>> From inspection I see that arm_block_set_unaligned_vect also has this issue, so I performed the
>>> same fix in that function as well.
>>>
>>> Bootstrapped and tested on arm-none-linux-gnueabihf.
>>>
>>> Ok for trunk?
>>>
>>> This bug appears on GCC 5 too and I'm currently testing this patch there.
>>> Ok to backport to GCC 5 as well?
>>>
>>> Thanks,
>>> Kyrill
>>>
>>> 2015-11-06  Kyrylo Tkachov
>>>
>>>     PR target/68143
>>>     * config/arm/arm.c (arm_block_set_unaligned_vect): Keep track of
>>>     offset from dstbase and use it appropriately in
>>>     adjust_automodify_address.
>>>     (arm_block_set_aligned_vect): Likewise.
>>>
>>> 2015-11-06  Kyrylo Tkachov
>>>
>>>     PR target/68143
>>>     * gcc.target/arm/pr68143_1.c: New test.
>>
>> Sorry about the delay in reviewing this. There's nothing arm-specific about this test - I'd just
>> put it in gcc.c-torture/execute; there are enough auto-testers with neon on that will show up
>> issues if this starts failing.
>
> Thanks, will do. I was on the fence about whether this should go in torture.
> I'll put it there.
>

For the record, here's what I committed with r230462.

2015-11-17  Kyrylo Tkachov

    PR target/68143
    * config/arm/arm.c (arm_block_set_unaligned_vect): Keep track of
    offset from dstbase and use it appropriately in
    adjust_automodify_address.
    (arm_block_set_aligned_vect): Likewise.

2015-11-17  Kyrylo Tkachov

    PR target/68143
    * gcc.c-torture/execute/pr68143_1.c: New test.

> Kyrill
>
>>
>> Ok with that change.
>>
>> Ramana
>>
>>> arm-setmem-offset.patch
>>>
>>>
>>> commit 78c6989a7af1df672ea227057180d79d717ed5f3
>>> Author: Kyrylo Tkachov
>>> Date:   Wed Oct 28 17:29:18 2015 +0000
>>>
>>>     [ARM] Properly update memory offsets when expanding setmem
>>>
>>> diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
>>> index 66e8afc..adf3143 100644
>>> --- a/gcc/config/arm/arm.c
>>> +++ b/gcc/config/arm/arm.c
>>> @@ -29268,7 +29268,7 @@ arm_block_set_unaligned_vect (rtx dstbase,
>>>    rtx (*gen_func) (rtx, rtx);
>>>    machine_mode mode;
>>>    unsigned HOST_WIDE_INT v = value;
>>> -
>>> +  unsigned int offset = 0;
>>>
>>>    gcc_assert ((align & 0x3) != 0);
>>>    nelt_v8 = GET_MODE_NUNITS (V8QImode);
>>>    nelt_v16 = GET_MODE_NUNITS (V16QImode);
>>> @@ -29289,7 +29289,7 @@ arm_block_set_unaligned_vect (rtx dstbase,
>>>      return false;
>>>
>>>    dst = copy_addr_to_reg (XEXP (dstbase, 0));
>>> -  mem = adjust_automodify_address (dstbase, mode, dst, 0);
>>> +  mem = adjust_automodify_address (dstbase, mode, dst, offset);
>>>
>>>    v = sext_hwi (v, BITS_PER_WORD);
>>>    val_elt = GEN_INT (v);
>>> @@ -29306,7 +29306,11 @@ arm_block_set_unaligned_vect (rtx dstbase,
>>>      {
>>>        emit_insn ((*gen_func) (mem, reg));
>>>        if (i + 2 * nelt_mode <= length)
>>> -      emit_insn (gen_add2_insn (dst, GEN_INT (nelt_mode)));
>>> +      {
>>> +        emit_insn (gen_add2_insn (dst, GEN_INT (nelt_mode)));
>>> +        offset += nelt_mode;
>>> +        mem = adjust_automodify_address (dstbase, mode, dst, offset);
>>> +      }
>>>      }
>>>
>>>    /* If there are not less than nelt_v8 bytes leftover, we must be in
>>> @@ -29317,6 +29321,9 @@ arm_block_set_unaligned_vect (rtx dstbase,
>>>    if (i + nelt_v8 < length)
>>>      {
>>>        emit_insn (gen_add2_insn (dst, GEN_INT (length - i)));
>>> +      offset += length - i;
>>> +      mem = adjust_automodify_address (dstbase, mode, dst, offset);
>>> +
>>>        /* We are shifting bytes back, set the alignment accordingly.  */
>>>        if ((length & 1) != 0 && align >= 2)
>>>          set_mem_align (mem, BITS_PER_UNIT);
>>> @@ -29327,12 +29334,13 @@ arm_block_set_unaligned_vect (rtx dstbase,
>>>    else if (i < length && i + nelt_v8 >= length)
>>>      {
>>>        if (mode == V16QImode)
>>> -      {
>>> -        reg = gen_lowpart (V8QImode, reg);
>>> -        mem = adjust_automodify_address (dstbase, V8QImode, dst, 0);
>>> -      }
>>> +      reg = gen_lowpart (V8QImode, reg);
>>> +
>>>        emit_insn (gen_add2_insn (dst, GEN_INT ((length - i)
>>>                                                + (nelt_mode - nelt_v8))));
>>> +      offset += (length - i) + (nelt_mode - nelt_v8);
>>> +      mem = adjust_automodify_address (dstbase, V8QImode, dst, offset);
>>> +
>>>        /* We are shifting bytes back, set the alignment accordingly.  */
>>>        if ((length & 1) != 0 && align >= 2)
>>>          set_mem_align (mem, BITS_PER_UNIT);
>>> @@ -29359,6 +29367,7 @@ arm_block_set_aligned_vect (rtx dstbase,
>>>    rtx rval[MAX_VECT_LEN];
>>>    machine_mode mode;
>>>    unsigned HOST_WIDE_INT v = value;
>>> +  unsigned int offset = 0;
>>>
>>>    gcc_assert ((align & 0x3) == 0);
>>>    nelt_v8 = GET_MODE_NUNITS (V8QImode);
>>> @@ -29390,14 +29399,15 @@ arm_block_set_aligned_vect (rtx dstbase,
>>>    /* Handle first 16 bytes specially using vst1:v16qi instruction.  */
>>>    if (mode == V16QImode)
>>>      {
>>> -      mem = adjust_automodify_address (dstbase, mode, dst, 0);
>>> +      mem = adjust_automodify_address (dstbase, mode, dst, offset);
>>>        emit_insn (gen_movmisalignv16qi (mem, reg));
>>>        i += nelt_mode;
>>>        /* Handle (8, 16) bytes leftover using vst1:v16qi again.  */
>>>        if (i + nelt_v8 < length && i + nelt_v16 > length)
>>>          {
>>>            emit_insn (gen_add2_insn (dst, GEN_INT (length - nelt_mode)));
>>> -          mem = adjust_automodify_address (dstbase, mode, dst, 0);
>>> +          offset += length - nelt_mode;
>>> +          mem = adjust_automodify_address (dstbase, mode, dst, offset);
>>>
>>>            /* We are shifting bytes back, set the alignment accordingly.  */
>>>            if ((length & 0x3) == 0)
>>>              set_mem_align (mem, BITS_PER_UNIT * 4);
>>> @@ -29419,7 +29429,7 @@ arm_block_set_aligned_vect (rtx dstbase,
>>>    for (; (i + nelt_mode <= length); i += nelt_mode)
>>>      {
>>>        addr = plus_constant (Pmode, dst, i);
>>> -      mem = adjust_automodify_address (dstbase, mode, addr, i);
>>> +      mem = adjust_automodify_address (dstbase, mode, addr, offset + i);
>>>        emit_move_insn (mem, reg);
>>>      }
>>>
>>> @@ -29428,8 +29438,8 @@ arm_block_set_aligned_vect (rtx dstbase,
>>>    if (i + UNITS_PER_WORD == length)
>>>      {
>>>        addr = plus_constant (Pmode, dst, i - UNITS_PER_WORD);
>>> -      mem = adjust_automodify_address (dstbase, mode,
>>> -                                       addr, i - UNITS_PER_WORD);
>>> +      offset += i - UNITS_PER_WORD;
>>> +      mem = adjust_automodify_address (dstbase, mode, addr, offset);
>>>        /* We are shifting 4 bytes back, set the alignment accordingly.  */
>>>        if (align > UNITS_PER_WORD)
>>>          set_mem_align (mem, BITS_PER_UNIT * UNITS_PER_WORD);
>>> @@ -29441,7 +29451,8 @@ arm_block_set_aligned_vect (rtx dstbase,
>>>    else if (i < length)
>>>      {
>>>        emit_insn (gen_add2_insn (dst, GEN_INT (length - nelt_mode)));
>>> -      mem = adjust_automodify_address (dstbase, mode, dst, 0);
>>> +      offset += length - nelt_mode;
>>> +      mem = adjust_automodify_address (dstbase, mode, dst, offset);
>>>        /* We are shifting bytes back, set the alignment accordingly.  */
>>>        if ((length & 1) == 0)
>>>          set_mem_align (mem, BITS_PER_UNIT * 2);
>>> diff --git a/gcc/testsuite/gcc.target/arm/pr68143_1.c b/gcc/testsuite/gcc.target/arm/pr68143_1.c
>>> new file mode 100644
>>> index 0000000..323473f
>>> --- /dev/null
>>> +++ b/gcc/testsuite/gcc.target/arm/pr68143_1.c
>>> @@ -0,0 +1,36 @@
>>> +/* { dg-do run } */
>>> +/* { dg-require-effective-target arm_neon_hw } */
>>> +/* { dg-options "-O3 -mcpu=cortex-a57" } */
>>> +/* { dg-add-options arm_neon } */
>>> +
>>> +#define NULL 0
>>> +
>>> +struct stuff
>>> +{
>>> +  int a;
>>> +  int b;
>>> +  int c;
>>> +  int d;
>>> +  int e;
>>> +  char *f;
>>> +  int g;
>>> +};
>>> +
>>> +void __attribute__ ((noinline))
>>> +bar (struct stuff *x)
>>> +{
>>> +  if (x->g != 2)
>>> +    __builtin_abort ();
>>> +}
>>> +
>>> +int
>>> +main (int argc, char** argv)
>>> +{
>>> +  struct stuff x = {0, 0, 0, 0, 0, NULL, 0};
>>> +  x.a = 100;
>>> +  x.d = 100;
>>> +  x.g = 2;
>>> +  /* Struct should now look like {100, 0, 0, 100, 0, 0, 0, 2}.  */
>>> +  bar (&x);
>>> +  return 0;
>>> +}
>>>
>

commit 7f329a2f9c3efdb5e7a6483792fcfab945cc7a84
Author: Kyrylo Tkachov
Date:   Wed Oct 28 17:29:18 2015 +0000

    [ARM] Properly update memory offsets when expanding setmem

diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index d0fe028..8a92798 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -29171,7 +29171,7 @@ arm_block_set_unaligned_vect (rtx dstbase,
   rtx (*gen_func) (rtx, rtx);
   machine_mode mode;
   unsigned HOST_WIDE_INT v = value;
-
+  unsigned int offset = 0;

   gcc_assert ((align & 0x3) != 0);
   nelt_v8 = GET_MODE_NUNITS (V8QImode);
   nelt_v16 = GET_MODE_NUNITS (V16QImode);
@@ -29192,7 +29192,7 @@ arm_block_set_unaligned_vect (rtx dstbase,
     return false;

   dst = copy_addr_to_reg (XEXP (dstbase, 0));
-  mem = adjust_automodify_address (dstbase, mode, dst, 0);
+  mem = adjust_automodify_address (dstbase, mode, dst, offset);

   v = sext_hwi (v, BITS_PER_WORD);
   val_elt = GEN_INT (v);
@@ -29209,7 +29209,11 @@ arm_block_set_unaligned_vect (rtx dstbase,
     {
       emit_insn ((*gen_func) (mem, reg));
       if (i + 2 * nelt_mode <= length)
-      emit_insn (gen_add2_insn (dst, GEN_INT (nelt_mode)));
+      {
+        emit_insn (gen_add2_insn (dst, GEN_INT (nelt_mode)));
+        offset += nelt_mode;
+        mem = adjust_automodify_address (dstbase, mode, dst, offset);
+      }
     }

   /* If there are not less than nelt_v8 bytes leftover, we must be in
@@ -29220,6 +29224,9 @@ arm_block_set_unaligned_vect (rtx dstbase,
   if (i + nelt_v8 < length)
     {
       emit_insn (gen_add2_insn (dst, GEN_INT (length - i)));
+      offset += length - i;
+      mem = adjust_automodify_address (dstbase, mode, dst, offset);
+
       /* We are shifting bytes back, set the alignment accordingly.  */
       if ((length & 1) != 0 && align >= 2)
         set_mem_align (mem, BITS_PER_UNIT);
@@ -29230,12 +29237,13 @@ arm_block_set_unaligned_vect (rtx dstbase,
   else if (i < length && i + nelt_v8 >= length)
     {
       if (mode == V16QImode)
-      {
-        reg = gen_lowpart (V8QImode, reg);
-        mem = adjust_automodify_address (dstbase, V8QImode, dst, 0);
-      }
+      reg = gen_lowpart (V8QImode, reg);
+
       emit_insn (gen_add2_insn (dst, GEN_INT ((length - i)
                                               + (nelt_mode - nelt_v8))));
+      offset += (length - i) + (nelt_mode - nelt_v8);
+      mem = adjust_automodify_address (dstbase, V8QImode, dst, offset);
+
       /* We are shifting bytes back, set the alignment accordingly.  */
       if ((length & 1) != 0 && align >= 2)
         set_mem_align (mem, BITS_PER_UNIT);
@@ -29262,6 +29270,7 @@ arm_block_set_aligned_vect (rtx dstbase,
   rtx rval[MAX_VECT_LEN];
   machine_mode mode;
   unsigned HOST_WIDE_INT v = value;
+  unsigned int offset = 0;

   gcc_assert ((align & 0x3) == 0);
   nelt_v8 = GET_MODE_NUNITS (V8QImode);
@@ -29293,14 +29302,15 @@ arm_block_set_aligned_vect (rtx dstbase,
   /* Handle first 16 bytes specially using vst1:v16qi instruction.  */
   if (mode == V16QImode)
     {
-      mem = adjust_automodify_address (dstbase, mode, dst, 0);
+      mem = adjust_automodify_address (dstbase, mode, dst, offset);
       emit_insn (gen_movmisalignv16qi (mem, reg));
       i += nelt_mode;
       /* Handle (8, 16) bytes leftover using vst1:v16qi again.  */
       if (i + nelt_v8 < length && i + nelt_v16 > length)
         {
           emit_insn (gen_add2_insn (dst, GEN_INT (length - nelt_mode)));
-          mem = adjust_automodify_address (dstbase, mode, dst, 0);
+          offset += length - nelt_mode;
+          mem = adjust_automodify_address (dstbase, mode, dst, offset);

           /* We are shifting bytes back, set the alignment accordingly.  */
           if ((length & 0x3) == 0)
             set_mem_align (mem, BITS_PER_UNIT * 4);
@@ -29322,7 +29332,7 @@ arm_block_set_aligned_vect (rtx dstbase,
   for (; (i + nelt_mode <= length); i += nelt_mode)
     {
       addr = plus_constant (Pmode, dst, i);
-      mem = adjust_automodify_address (dstbase, mode, addr, i);
+      mem = adjust_automodify_address (dstbase, mode, addr, offset + i);
       emit_move_insn (mem, reg);
     }

@@ -29331,8 +29341,8 @@ arm_block_set_aligned_vect (rtx dstbase,
   if (i + UNITS_PER_WORD == length)
     {
       addr = plus_constant (Pmode, dst, i - UNITS_PER_WORD);
-      mem = adjust_automodify_address (dstbase, mode,
-                                       addr, i - UNITS_PER_WORD);
+      offset += i - UNITS_PER_WORD;
+      mem = adjust_automodify_address (dstbase, mode, addr, offset);
       /* We are shifting 4 bytes back, set the alignment accordingly.  */
       if (align > UNITS_PER_WORD)
         set_mem_align (mem, BITS_PER_UNIT * UNITS_PER_WORD);
@@ -29344,7 +29354,8 @@ arm_block_set_aligned_vect (rtx dstbase,
   else if (i < length)
     {
       emit_insn (gen_add2_insn (dst, GEN_INT (length - nelt_mode)));
-      mem = adjust_automodify_address (dstbase, mode, dst, 0);
+      offset += length - nelt_mode;
+      mem = adjust_automodify_address (dstbase, mode, dst, offset);
       /* We are shifting bytes back, set the alignment accordingly.  */
       if ((length & 1) == 0)
         set_mem_align (mem, BITS_PER_UNIT * 2);
diff --git a/gcc/testsuite/gcc.c-torture/execute/pr68143_1.c b/gcc/testsuite/gcc.c-torture/execute/pr68143_1.c
new file mode 100644
index 0000000..cbfbbc2
--- /dev/null
+++ b/gcc/testsuite/gcc.c-torture/execute/pr68143_1.c
@@ -0,0 +1,31 @@
+#define NULL 0
+
+struct stuff
+{
+  int a;
+  int b;
+  int c;
+  int d;
+  int e;
+  char *f;
+  int g;
+};
+
+void __attribute__ ((noinline))
+bar (struct stuff *x)
+{
+  if (x->g != 2)
+    __builtin_abort ();
+}
+
+int
+main (int argc, char** argv)
+{
+  struct stuff x = {0, 0, 0, 0, 0, NULL, 0};
+  x.a = 100;
+  x.d = 100;
+  x.g = 2;
+  /* Struct should now look like {100, 0, 0, 100, 0, 0, 0, 2}.  */
+  bar (&x);
+  return 0;
+}