
[v2] ARM: Add Cortex-A15 optimized NEON and VFP memcpy routines, with IFUNC.

Message ID: 516D18F0.4060009@linaro.org
State: Superseded

Commit Message

Will Newton April 16, 2013, 9:25 a.m. UTC
Add a high performance memcpy routine optimized for Cortex-A15 with
variants for use in the presence of NEON and VFP hardware selected
at runtime using indirect function support.

This was tested on armv7l-unknown-linux-gnueabihf. One new testsuite
failure was introduced (elf/ifuncmain5picstatic), which was caused by
a bug in ld. A fix for that ld issue has been submitted here:

  http://sourceware.org/ml/binutils/2013-04/msg00143.html

ports/ChangeLog.arm:

2013-04-15  Will Newton  <will.newton@linaro.org>

	* sysdeps/arm/armv7/multiarch/Makefile: New file.
	* sysdeps/arm/armv7/multiarch/ifunc-impl-list.c: Likewise.
	* sysdeps/arm/armv7/multiarch/memcpy.S: Likewise.
	* sysdeps/arm/armv7/multiarch/memcpy_impl.S: Likewise.
	* sysdeps/arm/armv7/multiarch/memcpy_neon.S: Likewise.
	* sysdeps/arm/armv7/multiarch/memcpy_vfp.S: Likewise.

Signed-off-by: Will Newton <will.newton@linaro.org>
---
 ports/sysdeps/arm/armv7/multiarch/Makefile         |   3 +
 .../sysdeps/arm/armv7/multiarch/ifunc-impl-list.c  |  46 ++
 ports/sysdeps/arm/armv7/multiarch/memcpy.S         |  64 +++
 ports/sysdeps/arm/armv7/multiarch/memcpy_impl.S    | 600 +++++++++++++++++++++
 ports/sysdeps/arm/armv7/multiarch/memcpy_neon.S    |   3 +
 ports/sysdeps/arm/armv7/multiarch/memcpy_vfp.S     |   3 +
 6 files changed, 719 insertions(+)
 create mode 100644 ports/sysdeps/arm/armv7/multiarch/Makefile
 create mode 100644 ports/sysdeps/arm/armv7/multiarch/ifunc-impl-list.c
 create mode 100644 ports/sysdeps/arm/armv7/multiarch/memcpy.S
 create mode 100644 ports/sysdeps/arm/armv7/multiarch/memcpy_impl.S
 create mode 100644 ports/sysdeps/arm/armv7/multiarch/memcpy_neon.S
 create mode 100644 ports/sysdeps/arm/armv7/multiarch/memcpy_vfp.S

Changes in v2:
 - rewrite indirect function to avoid GOT relocs

Comments

Richard Henderson April 17, 2013, 3:40 p.m. UTC | #1
On 2013-04-16 11:25, Will Newton wrote:
>   ports/sysdeps/arm/armv7/multiarch/Makefile         |   3 +

Does this really require v7?  From a brief read I didn't see anything in the 
_arm version that didn't work since v5te (ldrd and pld).  Any reason not to put 
this into armv6 instead?

> +ENTRY(memcpy)
> +	.type	memcpy, %gnu_indirect_function
> +	ldr	r1, .Lmemcpy_arm
> +	tst	r0, #HWCAP_ARM_NEON
> +	it	ne
> +	ldrne	r1, .Lmemcpy_neon
> +	bne	1f

Swap vfp and neon tests and you don't need the branch.
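
Roughly, as an untested sketch that keeps the label and register use of the
posted resolver (so the 1: base for the PC-relative offsets is unchanged),
with the NEON check simply overriding the VFP result:

	ldr	r1, .Lmemcpy_arm
	tst	r0, #HWCAP_ARM_VFP
	it	ne
	ldrne	r1, .Lmemcpy_vfp
	tst	r0, #HWCAP_ARM_NEON
	it	ne
	ldrne	r1, .Lmemcpy_neon
1:
	add	r1, pc
	mov	r0, r1
	DO_RET(lr)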

> +.Lreturn:

Unused label?

> +	ldr	tmp1, [src, #-60]	/* 15 words to go.  */
> +	str	tmp1, [dst, #-60]

These negative offsets mean thumb2 doesn't work.  That's fine, but it means 
that you need care for this in the _arm case.

You have two choices: either do the swapping to arm mode by hand in the impl 
file, or force the entire memcpy.o to arm mode by using #define NO_THUMB at the 
top, before the #include <sysdep.h>.

If you choose the latter, then you don't have to worry about thumb2's restriction
on rd=rn when rm=pc, and can avoid the extra move, as well as the then-unnecessary
it markup.
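
A minimal sketch of the second option, assuming the NO_THUMB convention
described above; the define must appear before sysdep.h is pulled in:

#define NO_THUMB	/* Assemble this object in ARM state.  */
#include <sysdep.h>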


r~
Will Newton April 17, 2013, 3:53 p.m. UTC | #2
On 17 April 2013 16:40, Richard Henderson <rth@twiddle.net> wrote:

Hi Richard,

Thanks for the review!

> On 2013-04-16 11:25, Will Newton wrote:
>>
>>   ports/sysdeps/arm/armv7/multiarch/Makefile         |   3 +
>
>
> Does this really require v7?  From a brief read I didn't see anything in the
> _arm version that didn't work since v5te (ldrd and pld).  Any reason not to
> put this into armv6 instead?

From reading the comments of the code v7 is required for NEON, v6 is
required for VFP and unaligned access is required. The unaligned
access requirement may be a problem on v5 I'm not sure. NB: I did not
write the memcpy code so I have not looked at it in great detail.

I also had trouble building an armv6 glibc. I only have armv7 systems
to test on and it doesn't seem possible to build for armv6 on an armv7
system as far as I can tell.

>> +ENTRY(memcpy)
>> +       .type   memcpy, %gnu_indirect_function
>> +       ldr     r1, .Lmemcpy_arm
>> +       tst     r0, #HWCAP_ARM_NEON
>> +       it      ne
>> +       ldrne   r1, .Lmemcpy_neon
>> +       bne     1f
>
>
> Swap vfp and neon tests and you don't need the branch.

True, I'll do that.

>> +.Lreturn:
>
>
> Unused label?

Yes, thanks, will fix.

>> +       ldr     tmp1, [src, #-60]       /* 15 words to go.  */
>> +       str     tmp1, [dst, #-60]
>
>
> These negative offsets mean thumb2 doesn't work.  That's fine, but it means
> that you need care for this in the _arm case.
>
> You have two choices: either do the swapping to arm mode by hand in the impl
> file, or force the entire memcpy.o to arm mode by using #define NO_THUMB at
> the top, before the #include <sysdep.h>.

It sounds like switching it all to arm mode is the best option, I'll do that.

--
Will Newton
Toolchain Working Group, Linaro
Carlos O'Donell April 17, 2013, 5:51 p.m. UTC | #3
On 04/16/2013 02:25 AM, Will Newton wrote:
> +size_t
> +__libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
> +			size_t max)
> +{
> +  size_t i = 0;
> +  int hwcap;
> +
> +  hwcap = GLRO(dl_hwcap);
> +
> +  IFUNC_IMPL (i, name, memcpy,
> +	      IFUNC_IMPL_ADD (array, i, memcpy, hwcap & HWCAP_ARM_NEON,
> +			      __memcpy_neon)
> +	      IFUNC_IMPL_ADD (array, i, memcpy, hwcap & HWCAP_ARM_VFPv3,
> +			      __memcpy_vfp)
> +	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_arm));
> +
> +  return i;
> +}

If unaligned access is a requirement why not check for it here?

While unaligned access is virtually free on ARMv6 or higher it
is still possible for the U-bit to be disabled and for this to
crash on such systems.

Cheers,
Carlos.
Richard Henderson April 18, 2013, 7:42 a.m. UTC | #4
On 2013-04-17 17:53, Will Newton wrote:
> From reading the comments of the code v7 is required for NEON, v6 is
> required for VFP and unaligned access is required. The unaligned
> access requirement may be a problem on v5 I'm not sure. NB: I did not
> write the memcpy code so I have not looked at it in great detail.

Neon won't exist before v7, true.  But then it ought not be advertized
by the hwcap either, so it won't be selected.

Anyone got a raspberry pi to benchmark the current and the new here?


r~
Siddhesh Poyarekar April 18, 2013, 7:47 a.m. UTC | #5
On 18 April 2013 13:12, Richard Henderson <rth@twiddle.net> wrote:
> On 2013-04-17 17:53, Will Newton wrote:
>>
>> From reading the comments of the code v7 is required for NEON, v6 is
>>
>> required for VFP and unaligned access is required. The unaligned
>> access requirement may be a problem on v5 I'm not sure. NB: I did not
>> write the memcpy code so I have not looked at it in great detail.
>
>
> Neon won't exist before v7, true.  But then it ought not be advertized
> by the hwcap either, so it won't be selected.
>
> Anyone got a raspberry pi to benchmark the current and the new here?

I can do the benchmark on my raspberry pi.

Siddhesh
--
http://siddhesh.in
Will Newton April 18, 2013, 7:54 a.m. UTC | #6
On 18 April 2013 08:47, Siddhesh Poyarekar <siddhesh.poyarekar@gmail.com> wrote:
> On 18 April 2013 13:12, Richard Henderson <rth@twiddle.net> wrote:
>> On 2013-04-17 17:53, Will Newton wrote:
>>>
>>> From reading the comments of the code v7 is required for NEON, v6 is
>>>
>>> required for VFP and unaligned access is required. The unaligned
>>> access requirement may be a problem on v5 I'm not sure. NB: I did not
>>> write the memcpy code so I have not looked at it in great detail.
>>
>>
>> Neon won't exist before v7, true.  But then it ought not be advertized
>> by the hwcap either, so it won't be selected.
>>
>> Anyone got a raspberry pi to benchmark the current and the new here?
>
> I can do the benchmark on my raspberry pi.

Hi Siddhesh,

Thanks for the offer, I don't have any armv6 hardware to test with.
The easiest way to test is probably using the cortex-strings package
(as the patch I posted is for armv7).

https://launchpad.net/cortex-strings

"try-this -t memcpy" will benchmark the code I submitted, "try-glibc
-t memcpy" will benchmark the current glibc code.

--
Will Newton
Toolchain Working Group, Linaro
Will Newton April 18, 2013, 8:01 a.m. UTC | #7
On 17 April 2013 18:51, Carlos O'Donell <carlos@redhat.com> wrote:
> On 04/16/2013 02:25 AM, Will Newton wrote:
>> +size_t
>> +__libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
>> +                     size_t max)
>> +{
>> +  size_t i = 0;
>> +  int hwcap;
>> +
>> +  hwcap = GLRO(dl_hwcap);
>> +
>> +  IFUNC_IMPL (i, name, memcpy,
>> +           IFUNC_IMPL_ADD (array, i, memcpy, hwcap & HWCAP_ARM_NEON,
>> +                           __memcpy_neon)
>> +           IFUNC_IMPL_ADD (array, i, memcpy, hwcap & HWCAP_ARM_VFPv3,
>> +                           __memcpy_vfp)
>> +           IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_arm));
>> +
>> +  return i;
>> +}
>
> If unaligned access is a requirement why not check for it here?
>
> While unaligned access is virtually free on ARMv6 or higher it
> is still possible for the U-bit to be disabled and for this to
> crash on such systems.

Does glibc support building for this type of system? The compiler will
emit unaligned accesses by default on armv6 and above.

Also there isn't a HWCAP bit for unaligned as far as I can tell.

--
Will Newton
Toolchain Working Group, Linaro
Siddhesh Poyarekar April 18, 2013, 8:25 a.m. UTC | #8
On 18 April 2013 13:24, Will Newton <will.newton@linaro.org> wrote:
> Thanks for the offer, I don't have any armv6 hardware to test with.
> The easiest way to test is probably using the cortex-strings package
> (as the patch I posted is for armv7).
>
> https://launchpad.net/cortex-strings
>
> "try-this -t memcpy" will benchmark the code I submitted, "try-glibc
> -t memcpy" will benchmark the current glibc code.

OK, I'll try to do this tonight (i.e. ~8 hours from now) since the
board is at home and I'm not.  I assume you want 'current glibc code'
to be master?  The distribution version is 2.17 IIRC or at worst 2.16.

Siddhesh
--
http://siddhesh.in
Will Newton April 18, 2013, 8:37 a.m. UTC | #9
On 18 April 2013 09:25, Siddhesh Poyarekar <siddhesh.poyarekar@gmail.com> wrote:
> On 18 April 2013 13:24, Will Newton <will.newton@linaro.org> wrote:
>> Thanks for the offer, I don't have any armv6 hardware to test with.
>> The easiest way to test is probably using the cortex-strings package
>> (as the patch I posted is for armv7).
>>
>> https://launchpad.net/cortex-strings
>>
>> "try-this -t memcpy" will benchmark the code I submitted, "try-glibc
>> -t memcpy" will benchmark the current glibc code.
>
> OK, I'll try to do this tonight (i.e. ~8 hours from now) since the
> board is at home and I'm not.  I assume you want 'current glibc code'
> to be master?  The distribution version is 2.17 IIRC or at worst 2.16.

The cortex-strings package has its own copy of the glibc memcpy in the
reference sub-directory. I believe "try-none" will benchmark against
the C library installed.

--
Will Newton
Toolchain Working Group, Linaro
Siddhesh Poyarekar April 18, 2013, 5:58 p.m. UTC | #10
On 18 April 2013 13:55, Siddhesh Poyarekar <siddhesh.poyarekar@gmail.com> wrote:
> On 18 April 2013 13:24, Will Newton <will.newton@linaro.org> wrote:
>> Thanks for the offer, I don't have any armv6 hardware to test with.
>> The easiest way to test is probably using the cortex-strings package
>> (as the patch I posted is for armv7).
>>
>> https://launchpad.net/cortex-strings
>>
>> "try-this -t memcpy" will benchmark the code I submitted, "try-glibc
>> -t memcpy" will benchmark the current glibc code.
>
> OK, I'll try to do this tonight (i.e. ~8 hours from now) since the
> board is at home and I'm not.  I assume you want 'current glibc code'
> to be master?  The distribution version is 2.17 IIRC or at worst 2.16.

`try-this` crashes with a SIGILL:

Program received signal SIGILL, Illegal instruction.
memset () at src/linaro-a9/memset.S:64
64              cbz     r2, 10f         @ Exit if 0 length
(gdb) bt

I don't know much arm assembly, but digging around a bit I found that
cbz is implemented on armv6T2 or later and hence not available on my
board.  If you're interested in making this work on armv6 then I can
give this a shot - I've been meaning to start playing with arm
assembly anyway.
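
For reference, an untested pre-Thumb-2 equivalent of that line would be a
two-instruction sequence along the lines of:

	cmp	r2, #0		@ Exit if 0 length (cbz is ARMv6T2+ only)
	beq	10f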

Siddhesh
--
http://siddhesh.in
Joseph Myers April 19, 2013, 9:47 p.m. UTC | #11
On Tue, 16 Apr 2013, Will Newton wrote:

> Add a high performance memcpy routine optimized for Cortex-A15 with
> variants for use in the presence of NEON and VFP hardware selected
> at runtime using indirect function support.

The functions __aeabi_memcpy, __aeabi_memcpy4 and __aeabi_memcpy8, 
currently implemented to call memcpy, have their ABI defined to clobber 
only the core registers permitted to be clobbered by AAPCS, and not the 
normally call-clobbered VFP/NEON registers.

This patch would cause those functions to start clobbering some VFP/NEON 
registers.  So you need to do something to avoid that, whether making the 
__aeabi_* functions save and restore registers in the affected case, 
making the new functions do so or some other approach such as making 
__aeabi_* use a variant of the code with an extra save/restore.

As I understand the code, memcpy within ld.so itself will always be a 
version using the core registers only, so you shouldn't have the extra 
issue of needing to avoid corrupting such registers when used for argument 
passing in the VFP ABI variant.  Though if you were to support building a 
glibc version that requires VFP/NEON, where the new code is used 
unconditionally rather than just through IFUNC - and such a glibc is a 
perfectly reasonable thing to build, after all if you are building for the 
VFP ABI then you may as well assume at least VFP to be present everywhere 
- then you would need to deal with that issue.  (Cf. 
<http://sourceware.org/ml/libc-ports/2012-04/msg00087.html>.)
Will Newton April 22, 2013, 8:27 a.m. UTC | #12
On 18 April 2013 18:58, Siddhesh Poyarekar <siddhesh.poyarekar@gmail.com> wrote:

Hi Siddhesh,

> On 18 April 2013 13:55, Siddhesh Poyarekar <siddhesh.poyarekar@gmail.com> wrote:
>> On 18 April 2013 13:24, Will Newton <will.newton@linaro.org> wrote:
>>> Thanks for the offer, I don't have any armv6 hardware to test with.
>>> The easiest way to test is probably using the cortex-strings package
>>> (as the patch I posted is for armv7).
>>>
>>> https://launchpad.net/cortex-strings
>>>
>>> "try-this -t memcpy" will benchmark the code I submitted, "try-glibc
>>> -t memcpy" will benchmark the current glibc code.
>>
>> OK, I'll try to do this tonight (i.e. ~8 hours from now) since the
>> board is at home and I'm not.  I assume you want 'current glibc code'
>> to be master?  The distribution version is 2.17 IIRC or at worst 2.16.
>
> `try-this` crashes with a SIGILL:
>
> Program received signal SIGILL, Illegal instruction.
> memset () at src/linaro-a9/memset.S:64
> 64              cbz     r2, 10f         @ Exit if 0 length
> (gdb) bt
>
> I don't know much arm assembly, but digging around a bit I found that
> cbz is implemented on armv6T2 or later and hence not available on my
> board.  If you're interested in making this work on armv6 then I can
> give this a shot - I've been meaning to start playing with arm
> assembly anyway.

Thanks for trying this.

All the routines in cortex-strings are intended for armv7 (i.e.
Cortex-A cores) and it looks like the memset code is being run as a
side effect of the test. It might be quite simple to work around or it
could involve lots of work! I guess you could always drop in a known
armv6-compatible memset, e.g. the glibc one.

--
Will Newton
Toolchain Working Group, Linaro
Will Newton April 22, 2013, 8:32 a.m. UTC | #13
On 19 April 2013 22:47, Joseph S. Myers <joseph@codesourcery.com> wrote:

Hi Joseph,

> On Tue, 16 Apr 2013, Will Newton wrote:
>
>> Add a high performance memcpy routine optimized for Cortex-A15 with
>> variants for use in the presence of NEON and VFP hardware selected
>> at runtime using indirect function support.
>
> The functions __aeabi_memcpy, __aeabi_memcpy4 and __aeabi_memcpy8,
> currently implemented to call memcpy, have their ABI defined to clobber
> only the core registers permitted to be clobbered by AAPCS, and not the
> normally call-clobbered VFP/NEON registers.
>
> This patch would cause those functions to start clobbering some VFP/NEON
> registers.  So you need to do something to avoid that, whether making the
> __aeabi_* functions save and restore registers in the affected case,
> making the new functions do so or some other approach such as making
> __aeabi_* use a variant of the code with an extra save/restore.
>
> As I understand the code, memcpy within ld.so itself will always be a
> version using the core registers only, so you shouldn't have the extra
> issue of needing to avoid corrupting such registers when used for argument
> passing in the VFP ABI variant.  Though if you were to support building a
> glibc version that requires VFP/NEON, where the new code is used
> unconditionally rather than just through IFUNC - and such a glibc is a
> perfectly reasonable thing to build, after all if you are building for the
> VFP ABI then you may as well assume at least VFP to be present everywhere
> - then you would need to deal with that issue.  (Cf.
> <http://sourceware.org/ml/libc-ports/2012-04/msg00087.html>.)

I suspect adding in extra saving/restoring would be a significant
performance overhead, particularly for small copies. Would it make
sense just to make __aeabi_memcpy call the fallback arm routine? That
would mean no performance improvement for __aeabi_memcpy calls but no
performance degradation for the explicit memcpy case.
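
A hypothetical, untested sketch of that approach, assuming __memcpy_arm
remains a global symbol (the __aeabi_memcpy4/__aeabi_memcpy8 alignment-hint
variants could tail-call the same code, since the hints are only hints):

#include <sysdep.h>

ENTRY(__aeabi_memcpy)
	b	__memcpy_arm	@ core-register-only copy, AAPCS clobbers only
END(__aeabi_memcpy)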


--
Will Newton
Toolchain Working Group, Linaro

Patch

diff --git a/ports/sysdeps/arm/armv7/multiarch/Makefile b/ports/sysdeps/arm/armv7/multiarch/Makefile
new file mode 100644
index 0000000..e834cc9
--- /dev/null
+++ b/ports/sysdeps/arm/armv7/multiarch/Makefile
@@ -0,0 +1,3 @@ 
+ifeq ($(subdir),string)
+sysdep_routines += memcpy_neon memcpy_vfp
+endif
diff --git a/ports/sysdeps/arm/armv7/multiarch/ifunc-impl-list.c b/ports/sysdeps/arm/armv7/multiarch/ifunc-impl-list.c
new file mode 100644
index 0000000..176288b
--- /dev/null
+++ b/ports/sysdeps/arm/armv7/multiarch/ifunc-impl-list.c
@@ -0,0 +1,46 @@ 
+/* Enumerate available IFUNC implementations of a function.  arm version.
+   Copyright (C) 2013 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <assert.h>
+#include <string.h>
+#include <wchar.h>
+#include <ldsodefs.h>
+#include <sysdep.h>
+#include <ifunc-impl-list.h>
+
+/* Fill ARRAY of MAX elements with IFUNC implementations for function
+   NAME and return the number of valid entries.  */
+
+size_t
+__libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
+			size_t max)
+{
+  size_t i = 0;
+  int hwcap;
+
+  hwcap = GLRO(dl_hwcap);
+
+  IFUNC_IMPL (i, name, memcpy,
+	      IFUNC_IMPL_ADD (array, i, memcpy, hwcap & HWCAP_ARM_NEON,
+			      __memcpy_neon)
+	      IFUNC_IMPL_ADD (array, i, memcpy, hwcap & HWCAP_ARM_VFPv3,
+			      __memcpy_vfp)
+	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_arm));
+
+  return i;
+}
diff --git a/ports/sysdeps/arm/armv7/multiarch/memcpy.S b/ports/sysdeps/arm/armv7/multiarch/memcpy.S
new file mode 100644
index 0000000..1c78b53
--- /dev/null
+++ b/ports/sysdeps/arm/armv7/multiarch/memcpy.S
@@ -0,0 +1,64 @@ 
+/* Multiple versions of memcpy
+   All versions must be listed in ifunc-impl-list.c.
+   Copyright (C) 2013 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include <rtld-global-offsets.h>
+#include <arm-features.h>
+
+#if !defined NOT_IN_libc
+	.text
+ENTRY(memcpy)
+	.type	memcpy, %gnu_indirect_function
+	ldr	r1, .Lmemcpy_arm
+	tst	r0, #HWCAP_ARM_NEON
+	it	ne
+	ldrne	r1, .Lmemcpy_neon
+	bne	1f
+	tst	r0, #HWCAP_ARM_VFP
+	it	ne
+	ldrne	r1, .Lmemcpy_vfp
+1:
+	add	r1, pc
+	mov	r0, r1
+.Lreturn:
+	DO_RET(lr)
+
+.Lmemcpy_arm:
+	.long	C_SYMBOL_NAME(__memcpy_arm) - 1b - PC_OFS
+.Lmemcpy_neon:
+	.long	C_SYMBOL_NAME(__memcpy_neon) - 1b - PC_OFS
+.Lmemcpy_vfp:
+	.long	C_SYMBOL_NAME(__memcpy_vfp) - 1b - PC_OFS
+
+END(memcpy)
+
+libc_hidden_builtin_def (memcpy)
+
+#undef libc_hidden_builtin_def
+#define libc_hidden_builtin_def(name)
+#undef weak_alias
+#define weak_alias(x, y)
+#undef libc_hidden_def
+#define libc_hidden_def(name)
+
+#define memcpy __memcpy_arm
+
+#endif
+
+#include "memcpy_impl.S"
diff --git a/ports/sysdeps/arm/armv7/multiarch/memcpy_impl.S b/ports/sysdeps/arm/armv7/multiarch/memcpy_impl.S
new file mode 100644
index 0000000..2c466d25
--- /dev/null
+++ b/ports/sysdeps/arm/armv7/multiarch/memcpy_impl.S
@@ -0,0 +1,600 @@ 
+/* NEON/VFP/ARM version of memcpy optimized for Cortex-A15.
+   Copyright (C) 2013 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.
+
+   This memcpy routine is optimised for Cortex-A15 cores and takes advantage
+   of VFP or NEON when built with the appropriate flags.
+
+   Assumptions:
+
+    ARMv6 (ARMv7-a if using Neon)
+    ARM state
+    Unaligned accesses
+    LDRD/STRD support unaligned word accesses
+
+ */
+
+#include <sysdep.h>
+
+	.syntax unified
+	/* This implementation requires ARM state.  */
+	.arm
+
+#ifdef MEMCPY_NEON
+
+	.fpu	neon
+	.arch	armv7-a
+# define FRAME_SIZE	4
+# define USE_VFP
+# define USE_NEON
+
+#elif defined (MEMCPY_VFP)
+
+	.arch	armv6
+	.fpu	vfpv2
+# define FRAME_SIZE	32
+# define USE_VFP
+
+#else
+	.arch	armv6
+# define FRAME_SIZE    32
+
+#endif
+
+/* Old versions of GAS incorrectly implement the NEON align semantics.  */
+#ifdef BROKEN_ASM_NEON_ALIGN
+#define ALIGN(addr, align) addr,:align
+#else
+#define ALIGN(addr, align) addr:align
+#endif
+
+#define PC_OFFSET	8	/* PC pipeline compensation.  */
+#define INSN_SIZE	4
+
+/* Call parameters.  */
+#define dstin	r0
+#define src	r1
+#define count	r2
+
+/* Locals.  */
+#define tmp1	r3
+#define dst	ip
+#define tmp2	r10
+
+#ifndef USE_NEON
+/* For bulk copies using GP registers.  */
+#define	A_l	r2		/* Call-clobbered.  */
+#define	A_h	r3		/* Call-clobbered.  */
+#define	B_l	r4
+#define	B_h	r5
+#define	C_l	r6
+#define	C_h	r7
+#define	D_l	r8
+#define	D_h	r9
+#endif
+
+/* Number of lines ahead to pre-fetch data.  If you change this the code
+   below will need adjustment to compensate.  */
+
+#define prefetch_lines	5
+
+#ifdef USE_VFP
+	.macro	cpy_line_vfp vreg, base
+	vstr	\vreg, [dst, #\base]
+	vldr	\vreg, [src, #\base]
+	vstr	d0, [dst, #\base + 8]
+	vldr	d0, [src, #\base + 8]
+	vstr	d1, [dst, #\base + 16]
+	vldr	d1, [src, #\base + 16]
+	vstr	d2, [dst, #\base + 24]
+	vldr	d2, [src, #\base + 24]
+	vstr	\vreg, [dst, #\base + 32]
+	vldr	\vreg, [src, #\base + prefetch_lines * 64 - 32]
+	vstr	d0, [dst, #\base + 40]
+	vldr	d0, [src, #\base + 40]
+	vstr	d1, [dst, #\base + 48]
+	vldr	d1, [src, #\base + 48]
+	vstr	d2, [dst, #\base + 56]
+	vldr	d2, [src, #\base + 56]
+	.endm
+
+	.macro	cpy_tail_vfp vreg, base
+	vstr	\vreg, [dst, #\base]
+	vldr	\vreg, [src, #\base]
+	vstr	d0, [dst, #\base + 8]
+	vldr	d0, [src, #\base + 8]
+	vstr	d1, [dst, #\base + 16]
+	vldr	d1, [src, #\base + 16]
+	vstr	d2, [dst, #\base + 24]
+	vldr	d2, [src, #\base + 24]
+	vstr	\vreg, [dst, #\base + 32]
+	vstr	d0, [dst, #\base + 40]
+	vldr	d0, [src, #\base + 40]
+	vstr	d1, [dst, #\base + 48]
+	vldr	d1, [src, #\base + 48]
+	vstr	d2, [dst, #\base + 56]
+	vldr	d2, [src, #\base + 56]
+	.endm
+#endif
+
+	.p2align 6
+ENTRY(memcpy)
+
+	mov	dst, dstin	/* Preserve dstin, we need to return it.  */
+	cmp	count, #64
+	bge	.Lcpy_not_short
+	/* Deal with small copies quickly by dropping straight into the
+	   exit block.  */
+
+.Ltail63unaligned:
+#ifdef USE_NEON
+	and	tmp1, count, #0x38
+	rsb	tmp1, tmp1, #(56 - PC_OFFSET + INSN_SIZE)
+	add	pc, pc, tmp1
+	vld1.8	{d0}, [src]!	/* 14 words to go.  */
+	vst1.8	{d0}, [dst]!
+	vld1.8	{d0}, [src]!	/* 12 words to go.  */
+	vst1.8	{d0}, [dst]!
+	vld1.8	{d0}, [src]!	/* 10 words to go.  */
+	vst1.8	{d0}, [dst]!
+	vld1.8	{d0}, [src]!	/* 8 words to go.  */
+	vst1.8	{d0}, [dst]!
+	vld1.8	{d0}, [src]!	/* 6 words to go.  */
+	vst1.8	{d0}, [dst]!
+	vld1.8	{d0}, [src]!	/* 4 words to go.  */
+	vst1.8	{d0}, [dst]!
+	vld1.8	{d0}, [src]!	/* 2 words to go.  */
+	vst1.8	{d0}, [dst]!
+
+	tst	count, #4
+	ldrne	tmp1, [src], #4
+	strne	tmp1, [dst], #4
+#else
+	/* Copy up to 15 full words of data.  May not be aligned.  */
+	/* Cannot use VFP for unaligned data.  */
+	and	tmp1, count, #0x3c
+	add	dst, dst, tmp1
+	add	src, src, tmp1
+	rsb	tmp1, tmp1, #(60 - PC_OFFSET/2 + INSN_SIZE/2)
+	/* Jump directly into the sequence below at the correct offset.  */
+	add	pc, pc, tmp1, lsl #1
+
+	ldr	tmp1, [src, #-60]	/* 15 words to go.  */
+	str	tmp1, [dst, #-60]
+
+	ldr	tmp1, [src, #-56]	/* 14 words to go.  */
+	str	tmp1, [dst, #-56]
+	ldr	tmp1, [src, #-52]
+	str	tmp1, [dst, #-52]
+
+	ldr	tmp1, [src, #-48]	/* 12 words to go.  */
+	str	tmp1, [dst, #-48]
+	ldr	tmp1, [src, #-44]
+	str	tmp1, [dst, #-44]
+
+	ldr	tmp1, [src, #-40]	/* 10 words to go.  */
+	str	tmp1, [dst, #-40]
+	ldr	tmp1, [src, #-36]
+	str	tmp1, [dst, #-36]
+
+	ldr	tmp1, [src, #-32]	/* 8 words to go.  */
+	str	tmp1, [dst, #-32]
+	ldr	tmp1, [src, #-28]
+	str	tmp1, [dst, #-28]
+
+	ldr	tmp1, [src, #-24]	/* 6 words to go.  */
+	str	tmp1, [dst, #-24]
+	ldr	tmp1, [src, #-20]
+	str	tmp1, [dst, #-20]
+
+	ldr	tmp1, [src, #-16]	/* 4 words to go.  */
+	str	tmp1, [dst, #-16]
+	ldr	tmp1, [src, #-12]
+	str	tmp1, [dst, #-12]
+
+	ldr	tmp1, [src, #-8]	/* 2 words to go.  */
+	str	tmp1, [dst, #-8]
+	ldr	tmp1, [src, #-4]
+	str	tmp1, [dst, #-4]
+#endif
+
+	lsls	count, count, #31
+	ldrhcs	tmp1, [src], #2
+	ldrbne	src, [src]		/* Src is dead, use as a scratch.  */
+	strhcs	tmp1, [dst], #2
+	strbne	src, [dst]
+	bx	lr
+
+.Lcpy_not_short:
+	/* At least 64 bytes to copy, but don't know the alignment yet.  */
+	str	tmp2, [sp, #-FRAME_SIZE]!
+	and	tmp2, src, #3
+	and	tmp1, dst, #3
+	cmp	tmp1, tmp2
+	bne	.Lcpy_notaligned
+
+#ifdef USE_VFP
+	/* Magic dust alert!  Force VFP on Cortex-A9.  Experiments show
+	   that the FP pipeline is much better at streaming loads and
+	   stores.  This is outside the critical loop.  */
+	vmov.f32	s0, s0
+#endif
+
+	/* SRC and DST have the same mutual 32-bit alignment, but we may
+	   still need to pre-copy some bytes to get to natural alignment.
+	   We bring DST into full 64-bit alignment.  */
+	lsls	tmp2, dst, #29
+	beq	1f
+	rsbs	tmp2, tmp2, #0
+	sub	count, count, tmp2, lsr #29
+	ldrmi	tmp1, [src], #4
+	strmi	tmp1, [dst], #4
+	lsls	tmp2, tmp2, #2
+	ldrhcs	tmp1, [src], #2
+	ldrbne	tmp2, [src], #1
+	strhcs	tmp1, [dst], #2
+	strbne	tmp2, [dst], #1
+
+1:
+	subs	tmp2, count, #64	/* Use tmp2 for count.  */
+	blt	.Ltail63aligned
+
+	cmp	tmp2, #512
+	bge	.Lcpy_body_long
+
+.Lcpy_body_medium:			/* Count in tmp2.  */
+#ifdef USE_VFP
+1:
+	vldr	d0, [src, #0]
+	subs	tmp2, tmp2, #64
+	vldr	d1, [src, #8]
+	vstr	d0, [dst, #0]
+	vldr	d0, [src, #16]
+	vstr	d1, [dst, #8]
+	vldr	d1, [src, #24]
+	vstr	d0, [dst, #16]
+	vldr	d0, [src, #32]
+	vstr	d1, [dst, #24]
+	vldr	d1, [src, #40]
+	vstr	d0, [dst, #32]
+	vldr	d0, [src, #48]
+	vstr	d1, [dst, #40]
+	vldr	d1, [src, #56]
+	vstr	d0, [dst, #48]
+	add	src, src, #64
+	vstr	d1, [dst, #56]
+	add	dst, dst, #64
+	bge	1b
+	tst	tmp2, #0x3f
+	beq	.Ldone
+
+.Ltail63aligned:			/* Count in tmp2.  */
+	and	tmp1, tmp2, #0x38
+	add	dst, dst, tmp1
+	add	src, src, tmp1
+	rsb	tmp1, tmp1, #(56 - PC_OFFSET + INSN_SIZE)
+	add	pc, pc, tmp1
+
+	vldr	d0, [src, #-56]	/* 14 words to go.  */
+	vstr	d0, [dst, #-56]
+	vldr	d0, [src, #-48]	/* 12 words to go.  */
+	vstr	d0, [dst, #-48]
+	vldr	d0, [src, #-40]	/* 10 words to go.  */
+	vstr	d0, [dst, #-40]
+	vldr	d0, [src, #-32]	/* 8 words to go.  */
+	vstr	d0, [dst, #-32]
+	vldr	d0, [src, #-24]	/* 6 words to go.  */
+	vstr	d0, [dst, #-24]
+	vldr	d0, [src, #-16]	/* 4 words to go.  */
+	vstr	d0, [dst, #-16]
+	vldr	d0, [src, #-8]	/* 2 words to go.  */
+	vstr	d0, [dst, #-8]
+#else
+	sub	src, src, #8
+	sub	dst, dst, #8
+1:
+	ldrd	A_l, A_h, [src, #8]
+	strd	A_l, A_h, [dst, #8]
+	ldrd	A_l, A_h, [src, #16]
+	strd	A_l, A_h, [dst, #16]
+	ldrd	A_l, A_h, [src, #24]
+	strd	A_l, A_h, [dst, #24]
+	ldrd	A_l, A_h, [src, #32]
+	strd	A_l, A_h, [dst, #32]
+	ldrd	A_l, A_h, [src, #40]
+	strd	A_l, A_h, [dst, #40]
+	ldrd	A_l, A_h, [src, #48]
+	strd	A_l, A_h, [dst, #48]
+	ldrd	A_l, A_h, [src, #56]
+	strd	A_l, A_h, [dst, #56]
+	ldrd	A_l, A_h, [src, #64]!
+	strd	A_l, A_h, [dst, #64]!
+	subs	tmp2, tmp2, #64
+	bge	1b
+	tst	tmp2, #0x3f
+	bne	1f
+	ldr	tmp2,[sp], #FRAME_SIZE
+	bx	lr
+1:
+	add	src, src, #8
+	add	dst, dst, #8
+
+.Ltail63aligned:			/* Count in tmp2.  */
+	/* Copy up to 7 d-words of data.  Similar to Ltail63unaligned, but
+	   we know that the src and dest are 32-bit aligned so we can use
+	   LDRD/STRD to improve efficiency.  */
+	/* TMP2 is now negative, but we don't care about that.  The bottom
+	   six bits still tell us how many bytes are left to copy.  */
+
+	and	tmp1, tmp2, #0x38
+	add	dst, dst, tmp1
+	add	src, src, tmp1
+	rsb	tmp1, tmp1, #(56 - PC_OFFSET + INSN_SIZE)
+	add	pc, pc, tmp1
+	ldrd	A_l, A_h, [src, #-56]	/* 14 words to go.  */
+	strd	A_l, A_h, [dst, #-56]
+	ldrd	A_l, A_h, [src, #-48]	/* 12 words to go.  */
+	strd	A_l, A_h, [dst, #-48]
+	ldrd	A_l, A_h, [src, #-40]	/* 10 words to go.  */
+	strd	A_l, A_h, [dst, #-40]
+	ldrd	A_l, A_h, [src, #-32]	/* 8 words to go.  */
+	strd	A_l, A_h, [dst, #-32]
+	ldrd	A_l, A_h, [src, #-24]	/* 6 words to go.  */
+	strd	A_l, A_h, [dst, #-24]
+	ldrd	A_l, A_h, [src, #-16]	/* 4 words to go.  */
+	strd	A_l, A_h, [dst, #-16]
+	ldrd	A_l, A_h, [src, #-8]	/* 2 words to go.  */
+	strd	A_l, A_h, [dst, #-8]
+
+#endif
+	tst	tmp2, #4
+	ldrne	tmp1, [src], #4
+	strne	tmp1, [dst], #4
+	lsls	tmp2, tmp2, #31		/* Count (tmp2) now dead. */
+	ldrhcs	tmp1, [src], #2
+	ldrbne	tmp2, [src]
+	strhcs	tmp1, [dst], #2
+	strbne	tmp2, [dst]
+
+.Ldone:
+	ldr	tmp2, [sp], #FRAME_SIZE
+	bx	lr
+
+.Lcpy_body_long:			/* Count in tmp2.  */
+
+	/* Long copy.  We know that there's at least (prefetch_lines * 64)
+	   bytes to go.  */
+#ifdef USE_VFP
+	/* Don't use PLD.  Instead, read some data in advance of the current
+	   copy position into a register.  This should act like a PLD
+	   operation but we won't have to repeat the transfer.  */
+
+	vldr	d3, [src, #0]
+	vldr	d4, [src, #64]
+	vldr	d5, [src, #128]
+	vldr	d6, [src, #192]
+	vldr	d7, [src, #256]
+
+	vldr	d0, [src, #8]
+	vldr	d1, [src, #16]
+	vldr	d2, [src, #24]
+	add	src, src, #32
+
+	subs	tmp2, tmp2, #prefetch_lines * 64 * 2
+	blt	2f
+1:
+	cpy_line_vfp	d3, 0
+	cpy_line_vfp	d4, 64
+	cpy_line_vfp	d5, 128
+	add	dst, dst, #3 * 64
+	add	src, src, #3 * 64
+	cpy_line_vfp	d6, 0
+	cpy_line_vfp	d7, 64
+	add	dst, dst, #2 * 64
+	add	src, src, #2 * 64
+	subs	tmp2, tmp2, #prefetch_lines * 64
+	bge	1b
+
+2:
+	cpy_tail_vfp	d3, 0
+	cpy_tail_vfp	d4, 64
+	cpy_tail_vfp	d5, 128
+	add	src, src, #3 * 64
+	add	dst, dst, #3 * 64
+	cpy_tail_vfp	d6, 0
+	vstr	d7, [dst, #64]
+	vldr	d7, [src, #64]
+	vstr	d0, [dst, #64 + 8]
+	vldr	d0, [src, #64 + 8]
+	vstr	d1, [dst, #64 + 16]
+	vldr	d1, [src, #64 + 16]
+	vstr	d2, [dst, #64 + 24]
+	vldr	d2, [src, #64 + 24]
+	vstr	d7, [dst, #64 + 32]
+	add	src, src, #96
+	vstr	d0, [dst, #64 + 40]
+	vstr	d1, [dst, #64 + 48]
+	vstr	d2, [dst, #64 + 56]
+	add	dst, dst, #128
+	add	tmp2, tmp2, #prefetch_lines * 64
+	b	.Lcpy_body_medium
+#else
+	/* Long copy.  Use an SMS style loop to maximize the I/O
+	   bandwidth of the core.  We don't have enough spare registers
+	   to synthesise prefetching, so use PLD operations.  */
+	/* Pre-bias src and dst.  */
+	sub	src, src, #8
+	sub	dst, dst, #8
+	pld	[src, #8]
+	pld	[src, #72]
+	subs	tmp2, tmp2, #64
+	pld	[src, #136]
+	ldrd	A_l, A_h, [src, #8]
+	strd	B_l, B_h, [sp, #8]
+	ldrd	B_l, B_h, [src, #16]
+	strd	C_l, C_h, [sp, #16]
+	ldrd	C_l, C_h, [src, #24]
+	strd	D_l, D_h, [sp, #24]
+	pld	[src, #200]
+	ldrd	D_l, D_h, [src, #32]!
+	b	1f
+	.p2align	6
+2:
+	pld	[src, #232]
+	strd	A_l, A_h, [dst, #40]
+	ldrd	A_l, A_h, [src, #40]
+	strd	B_l, B_h, [dst, #48]
+	ldrd	B_l, B_h, [src, #48]
+	strd	C_l, C_h, [dst, #56]
+	ldrd	C_l, C_h, [src, #56]
+	strd	D_l, D_h, [dst, #64]!
+	ldrd	D_l, D_h, [src, #64]!
+	subs	tmp2, tmp2, #64
+1:
+	strd	A_l, A_h, [dst, #8]
+	ldrd	A_l, A_h, [src, #8]
+	strd	B_l, B_h, [dst, #16]
+	ldrd	B_l, B_h, [src, #16]
+	strd	C_l, C_h, [dst, #24]
+	ldrd	C_l, C_h, [src, #24]
+	strd	D_l, D_h, [dst, #32]
+	ldrd	D_l, D_h, [src, #32]
+	bcs	2b
+	/* Save the remaining bytes and restore the callee-saved regs.  */
+	strd	A_l, A_h, [dst, #40]
+	add	src, src, #40
+	strd	B_l, B_h, [dst, #48]
+	ldrd	B_l, B_h, [sp, #8]
+	strd	C_l, C_h, [dst, #56]
+	ldrd	C_l, C_h, [sp, #16]
+	strd	D_l, D_h, [dst, #64]
+	ldrd	D_l, D_h, [sp, #24]
+	add	dst, dst, #72
+	tst	tmp2, #0x3f
+	bne	.Ltail63aligned
+	ldr	tmp2, [sp], #FRAME_SIZE
+	bx	lr
+#endif
+
+.Lcpy_notaligned:
+	pld	[src]
+	pld	[src, #64]
+	/* There's at least 64 bytes to copy, but there is no mutual
+	   alignment.  */
+	/* Bring DST to 64-bit alignment.  */
+	lsls	tmp2, dst, #29
+	pld	[src, #(2 * 64)]
+	beq	1f
+	rsbs	tmp2, tmp2, #0
+	sub	count, count, tmp2, lsr #29
+	ldrmi	tmp1, [src], #4
+	strmi	tmp1, [dst], #4
+	lsls	tmp2, tmp2, #2
+	ldrbne	tmp1, [src], #1
+	ldrhcs	tmp2, [src], #2
+	strbne	tmp1, [dst], #1
+	strhcs	tmp2, [dst], #2
+1:
+	pld	[src, #(3 * 64)]
+	subs	count, count, #64
+	ldrmi	tmp2, [sp], #FRAME_SIZE
+	bmi	.Ltail63unaligned
+	pld	[src, #(4 * 64)]
+
+#ifdef USE_NEON
+	vld1.8	{d0-d3}, [src]!
+	vld1.8	{d4-d7}, [src]!
+	subs	count, count, #64
+	bmi	2f
+1:
+	pld	[src, #(4 * 64)]
+	vst1.8	{d0-d3}, [ALIGN (dst, 64)]!
+	vld1.8	{d0-d3}, [src]!
+	vst1.8	{d4-d7}, [ALIGN (dst, 64)]!
+	vld1.8	{d4-d7}, [src]!
+	subs	count, count, #64
+	bpl	1b
+2:
+	vst1.8	{d0-d3}, [ALIGN (dst, 64)]!
+	vst1.8	{d4-d7}, [ALIGN (dst, 64)]!
+	ands	count, count, #0x3f
+#else
+	/* Use an SMS style loop to maximize the I/O bandwidth.  */
+	sub	src, src, #4
+	sub	dst, dst, #8
+	subs	tmp2, count, #64	/* Use tmp2 for count.  */
+	ldr	A_l, [src, #4]
+	ldr	A_h, [src, #8]
+	strd	B_l, B_h, [sp, #8]
+	ldr	B_l, [src, #12]
+	ldr	B_h, [src, #16]
+	strd	C_l, C_h, [sp, #16]
+	ldr	C_l, [src, #20]
+	ldr	C_h, [src, #24]
+	strd	D_l, D_h, [sp, #24]
+	ldr	D_l, [src, #28]
+	ldr	D_h, [src, #32]!
+	b	1f
+	.p2align	6
+2:
+	pld	[src, #(5 * 64) - (32 - 4)]
+	strd	A_l, A_h, [dst, #40]
+	ldr	A_l, [src, #36]
+	ldr	A_h, [src, #40]
+	strd	B_l, B_h, [dst, #48]
+	ldr	B_l, [src, #44]
+	ldr	B_h, [src, #48]
+	strd	C_l, C_h, [dst, #56]
+	ldr	C_l, [src, #52]
+	ldr	C_h, [src, #56]
+	strd	D_l, D_h, [dst, #64]!
+	ldr	D_l, [src, #60]
+	ldr	D_h, [src, #64]!
+	subs	tmp2, tmp2, #64
+1:
+	strd	A_l, A_h, [dst, #8]
+	ldr	A_l, [src, #4]
+	ldr	A_h, [src, #8]
+	strd	B_l, B_h, [dst, #16]
+	ldr	B_l, [src, #12]
+	ldr	B_h, [src, #16]
+	strd	C_l, C_h, [dst, #24]
+	ldr	C_l, [src, #20]
+	ldr	C_h, [src, #24]
+	strd	D_l, D_h, [dst, #32]
+	ldr	D_l, [src, #28]
+	ldr	D_h, [src, #32]
+	bcs	2b
+
+	/* Save the remaining bytes and restore the callee-saved regs.  */
+	strd	A_l, A_h, [dst, #40]
+	add	src, src, #36
+	strd	B_l, B_h, [dst, #48]
+	ldrd	B_l, B_h, [sp, #8]
+	strd	C_l, C_h, [dst, #56]
+	ldrd	C_l, C_h, [sp, #16]
+	strd	D_l, D_h, [dst, #64]
+	ldrd	D_l, D_h, [sp, #24]
+	add	dst, dst, #72
+	ands	count, tmp2, #0x3f
+#endif
+	ldr	tmp2, [sp], #FRAME_SIZE
+	bne	.Ltail63unaligned
+	bx	lr
+
+END(memcpy)
+libc_hidden_builtin_def (memcpy)
diff --git a/ports/sysdeps/arm/armv7/multiarch/memcpy_neon.S b/ports/sysdeps/arm/armv7/multiarch/memcpy_neon.S
new file mode 100644
index 0000000..c0ef1f8
--- /dev/null
+++ b/ports/sysdeps/arm/armv7/multiarch/memcpy_neon.S
@@ -0,0 +1,3 @@ 
+#define MEMCPY_NEON
+#define memcpy __memcpy_neon
+#include "memcpy_impl.S"
diff --git a/ports/sysdeps/arm/armv7/multiarch/memcpy_vfp.S b/ports/sysdeps/arm/armv7/multiarch/memcpy_vfp.S
new file mode 100644
index 0000000..d21b702
--- /dev/null
+++ b/ports/sysdeps/arm/armv7/multiarch/memcpy_vfp.S
@@ -0,0 +1,3 @@ 
+#define MEMCPY_VFP
+#define memcpy __memcpy_vfp
+#include "memcpy_impl.S"