[v2,07/16] target/arm: Align vector registers

Message ID 20170912162513.21694-8-richard.henderson@linaro.org
State Superseded
Series TCG vectorization and example conversion

Commit Message

Richard Henderson Sept. 12, 2017, 4:25 p.m. UTC
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

---
 target/arm/cpu.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
2.13.5

Comments

Philippe Mathieu-Daudé Sept. 12, 2017, 6:50 p.m. UTC | #1
Hi Richard,

On 09/12/2017 01:25 PM, Richard Henderson wrote:
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   target/arm/cpu.h | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/target/arm/cpu.h b/target/arm/cpu.h
> index 98b9b26fd3..419f008277 100644
> --- a/target/arm/cpu.h
> +++ b/target/arm/cpu.h
> @@ -486,7 +486,7 @@ typedef struct CPUARMState {
>            * the two execution states, and means we do not need to explicitly
>            * map these registers when changing states.
>            */
> -        float64 regs[64];
> +        float64 regs[64] __attribute__((aligned(16)));

I understand this should be aligned to the biggest vector register the 
host supports, i.e. for AVX-512 this would be QEMU_ALIGNED(64), is that 
correct?

I'd rather use a #define such as HOST_VECTOR_LENGTH_BITS_MAX and 
QEMU_ALIGNED(HOST_VECTOR_LENGTH_BITS_MAX / BITS_PER_BYTE), or directly 
QEMU_ALIGNED(HOST_VECTOR_LENGTH_MAX); using a define makes it 
self-explanatory. Or shorter:

         float64 regs[64] QEMU_ALIGNED(HOST_VECTOR_SIZE);
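
For illustration (both names are hypothetical, nothing in QEMU defines
them today), the full sketch might read:

    /* Hypothetical sketch; BITS_PER_BYTE already exists in qemu/bitops.h. */
    #define HOST_VECTOR_LENGTH_BITS_MAX 512  /* widest host vectors: AVX-512 */
    #define HOST_VECTOR_SIZE (HOST_VECTOR_LENGTH_BITS_MAX / BITS_PER_BYTE)

            float64 regs[64] QEMU_ALIGNED(HOST_VECTOR_SIZE);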

What do you think?

Regards,

Phil.

>
>           uint32_t xregs[16];
>           /* We store these fpcsr fields separately for convenience.  */
>
Peter Maydell Sept. 12, 2017, 6:55 p.m. UTC | #2
On 12 September 2017 at 17:25, Richard Henderson
<richard.henderson@linaro.org> wrote:
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  target/arm/cpu.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/target/arm/cpu.h b/target/arm/cpu.h
> index 98b9b26fd3..419f008277 100644
> --- a/target/arm/cpu.h
> +++ b/target/arm/cpu.h
> @@ -486,7 +486,7 @@ typedef struct CPUARMState {
>           * the two execution states, and means we do not need to explicitly
>           * map these registers when changing states.
>           */
> -        float64 regs[64];
> +        float64 regs[64] __attribute__((aligned(16)));
>
>          uint32_t xregs[16];
>          /* We store these fpcsr fields separately for convenience.  */
> --
> 2.13.5

I notice we have a QEMU_ALIGNED() macro to wrap the __attribute__,
though we use it less often than not at the moment...
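
For reference, it is just a thin wrapper (in include/qemu/compiler.h):

    #define QEMU_ALIGNED(X) __attribute__((aligned(X)))

so the line in this patch would become:

    float64 regs[64] QEMU_ALIGNED(16);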

thanks
-- PMM
Philippe Mathieu-Daudé Sept. 12, 2017, 8:17 p.m. UTC | #3
On 09/12/2017 03:55 PM, Peter Maydell wrote:
> I notice we have a QEMU_ALIGNED() macro to wrap the __attribute__,
> though we use it less often than not at the moment...

Aesthetics aside, I find it useful for staying within the 80-character 
style limit.
Peter Maydell Sept. 12, 2017, 8:20 p.m. UTC | #4
On 12 September 2017 at 21:17, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
> On 09/12/2017 03:55 PM, Peter Maydell wrote:
>> I notice we have a QEMU_ALIGNED() macro to wrap the __attribute__,
>> though we use it less often than not at the moment...
>
> Aesthetics aside, I find it useful for staying within the 80-character
> style limit.

They do say that constraints are vital for art :-)

thanks
-- PMM
Philippe Mathieu-Daudé Sept. 12, 2017, 8:44 p.m. UTC | #5
>> -        float64 regs[64];
>> +        float64 regs[64] __attribute__((aligned(16)));
>
> I understand this should be aligned to the biggest vector register the
> host supports, i.e. for AVX-512 this would be QEMU_ALIGNED(64), is that
> correct?

Checking the datasheets:

"INTEL® ADVANCED VECTOR EXTENSIONS"

2.5 MEMORY ALIGNMENT

With the exception of explicitly aligned 16 or 32 byte SIMD load/store 
instructions, most VEX-encoded, arithmetic and data processing 
instructions operate in a flexible environment regarding memory address 
alignment, i.e. VEX-encoded instruction with 32-byte or 16-byte load 
semantics will support unaligned load operation by default. Memory 
arguments for most instructions with VEX prefix operate normally without 
causing #GP(0) on any byte-granularity alignment (unlike Legacy SSE 
instructions). The instructions that require explicit memory alignment 
requirements are listed in Table 2-4.

Table 2-4. Instructions Requiring Explicitly Aligned Memory

Require 32-byte alignment:
   VMOVDQA ymm, m256
   VMOVDQA m256, ymm
   VMOVAPS ymm, m256
   VMOVAPS m256, ymm
   VMOVAPD ymm, m256
   VMOVAPD m256, ymm
   VMOVNTPS m256, ymm
   VMOVNTPD m256, ymm
   VMOVNTDQ m256, ymm
   VMOVNTDQA ymm, m256

General Protection, #GP(0):
   VEX.256: Memory operand is not 32-byte aligned
   VEX.128: Memory operand is not 16-byte aligned
   Legacy SSE: Memory operand is not 16-byte aligned

--

"Intel® Architecture Instruction Set Extensions Programming Reference"

2.6 MEMORY ALIGNMENT

Memory alignment requirements on EVEX-encoded SIMD instructions are 
similar to VEX-encoded SIMD instructions. Memory alignment applies to 
EVEX-encoded SIMD instructions in three categories:
• Explicitly-aligned SIMD load and store instructions accessing 64 bytes 
of memory with EVEX prefix encoded vector length of 512 bits (e.g., 
VMOVAPD, VMOVAPS, VMOVDQA, etc.). These instructions always require
memory address to be aligned on 64-byte boundary.
• Explicitly-unaligned SIMD load and store instructions accessing 64 
bytes or less of data from memory (e.g. VMOVUPD, VMOVUPS, VMOVDQU, 
VMOVQ, VMOVD, etc.). These instructions do not require memory address
to be aligned on natural vector-length byte boundary.
• Most arithmetic and data processing instructions encoded using EVEX 
support memory access semantics. When these instructions access from 
memory, there are no alignment restrictions.
[...]
AVX-512 instructions may generate an #AC(0) fault on misaligned 4 or 
8-byte memory references in Ring-3 when CR0.AM=1. 16, 32 and 64-byte 
memory references will not generate #AC(0) fault. See Table 2-7 for details.
Certain AVX-512 Foundation instructions always require 64-byte alignment 
(see the complete list of VEX and EVEX encoded instructions in Table 
2-6). These instructions will #GP(0) if not aligned to 64-byte boundaries.
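
To make that concrete (my own sketch, not from the manuals): the
explicitly aligned load faults on a misaligned address, while the
explicitly unaligned one never does:

    #include <immintrin.h>

    /* _mm512_load_pd emits the explicitly aligned VMOVAPD zmm, m512:
     * p must be 64-byte aligned or the load raises #GP(0). */
    __m512d load_aligned(const double *p)
    {
        return _mm512_load_pd(p);
    }

    /* _mm512_loadu_pd emits VMOVUPD, which accepts any alignment. */
    __m512d load_any(const double *p)
    {
        return _mm512_loadu_pd(p);
    }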
Richard Henderson Sept. 13, 2017, 3:28 p.m. UTC | #6
On 09/12/2017 11:50 AM, Philippe Mathieu-Daudé wrote:
>> -        float64 regs[64];
>> +        float64 regs[64] __attribute__((aligned(16)));
>
> I understand this should be aligned to the biggest vector register the host
> supports, i.e. for AVX-512 this would be QEMU_ALIGNED(64), is that correct?

No.

Alignment of 16 is sufficient for "older" vector extensions, like altivec,
which require alignment in load/store insns.  But (so far at least) newer
vector extensions with larger vector sizes (AVX2, AVX512, ARM SVE) handle
unaligned load/store operations just fine.

Which means we need not require excessive alignment within the cpu struct.

The rule for this is documented in tcg/tcg-op-gvec.h, iirc.
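
To illustrate (a sketch in x86 intrinsics, not QEMU code): with a
16-byte guarantee, 128-bit host operations may use the aligned forms,
while wider AVX accesses simply use the unaligned forms, which do not
fault on alignment:

    #include <immintrin.h>

    /* Safe because the field is 16-byte aligned: emits MOVDQA. */
    static inline __m128i load_q(const __m128i *reg)
    {
        return _mm_load_si128(reg);
    }

    /* Only 16-byte alignment is guaranteed, so the 256-bit access
     * uses the unaligned VMOVDQU form, which never raises #GP(0). */
    static inline __m256i load_dq(const __m256i *reg)
    {
        return _mm256_loadu_si256(reg);
    }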


r~

Patch

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 98b9b26fd3..419f008277 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -486,7 +486,7 @@ typedef struct CPUARMState {
          * the two execution states, and means we do not need to explicitly
          * map these registers when changing states.
          */
-        float64 regs[64];
+        float64 regs[64] __attribute__((aligned(16)));
 
         uint32_t xregs[16];
         /* We store these fpcsr fields separately for convenience.  */