
[v7,26/27] tcg: enable MTTCG by default for ARM on x86 hosts

Message ID 20170119170507.16185-27-alex.bennee@linaro.org
State Superseded
Series Remaining MTTCG Base patches and ARM enablement

Commit Message

Alex Bennée Jan. 19, 2017, 5:05 p.m. UTC
This enables the multi-threaded system emulation by default for ARMv7
and ARMv8 guests using the x86_64 TCG backend. This is because on the
guest side:

  - The ARM translate.c/translate-64.c have been converted to
    - use MTTCG safe atomic primitives
    - emit the appropriate barrier ops
  - The ARM machine has been updated to
    - hold the BQL when modifying shared cross-vCPU state
    - defer cpu_reset to async safe work

All the host backends support the barrier and atomic primitives but
need to provide same-or-better support for normal load/store
operations.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
v7
  - drop configure check for backend
  - declare backend memory order for x86
  - declare guest memory order for ARM
  - add configure snippet to set TARGET_SUPPORTS_MTTCG
---
 configure             |  6 ++++++
 target/arm/cpu.h      |  3 +++
 tcg/i386/tcg-target.h | 16 ++++++++++++++++
 3 files changed, 25 insertions(+)

-- 
2.11.0
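
As a rough illustration of the "emit the appropriate barrier ops" point above: once the frontend emits a memory-barrier op for a guest barrier instruction (e.g. an ARM DMB), the code that runs on the host under MTTCG has to contain a real fence. A self-contained C11 stand-in for that end result (purely illustrative, not QEMU code):

#include <stdatomic.h>

/* Conceptual stand-in only: this is what the host-side effect of a
 * translated guest full barrier amounts to, expressed with C11 atomics
 * rather than the actual TCG ops used by the series.
 */
static inline void guest_full_barrier_equivalent(void)
{
    atomic_thread_fence(memory_order_seq_cst);
}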

Comments

Pranith Kumar Jan. 20, 2017, 12:08 a.m. UTC | #1
Alex Bennée writes:

> This enables the multi-threaded system emulation by default for ARMv7
> and ARMv8 guests using the x86_64 TCG backend. This is because on the
> guest side:
>
>   - The ARM translate.c/translate-64.c have been converted to
>     - use MTTCG safe atomic primitives
>     - emit the appropriate barrier ops
>   - The ARM machine has been updated to
>     - hold the BQL when modifying shared cross-vCPU state
>     - defer cpu_reset to async safe work
>
> All the host backends support the barrier and atomic primitives but
> need to provide same-or-better support for normal load/store
> operations.
>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

<snip>

> +/* This defines the natural memory order supported by this
> + * architecture before guarantees made by various barrier
> + * instructions.
> + *
> + * The x86 has a pretty strong memory ordering which only really
> + * allows for some stores to be re-ordered after loads.
> + */
> +#include "tcg-mo.h"
> +
> +static inline int get_tcg_target_mo(void)
> +{
> +    return TCG_MO_ALL & ~TCG_MO_LD_ST;
> +}
> +

Shouldn't this be TCG_MO_ALL & ~TCG_MO_ST_LD?

Thanks,
--
Pranith
Alex Bennée Jan. 20, 2017, 10:53 a.m. UTC | #2
Pranith Kumar <bobby.prani@gmail.com> writes:

> Alex Bennée writes:
>
>> This enables the multi-threaded system emulation by default for ARMv7
>> and ARMv8 guests using the x86_64 TCG backend. This is because on the
>> guest side:
>>
>>   - The ARM translate.c/translate-64.c have been converted to
>>     - use MTTCG safe atomic primitives
>>     - emit the appropriate barrier ops
>>   - The ARM machine has been updated to
>>     - hold the BQL when modifying shared cross-vCPU state
>>     - defer cpu_reset to async safe work
>>
>> All the host backends support the barrier and atomic primitives but
>> need to provide same-or-better support for normal load/store
>> operations.
>>
>> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>
> <snip>
>
>> +/* This defines the natural memory order supported by this
>> + * architecture before guarantees made by various barrier
>> + * instructions.
>> + *
>> + * The x86 has a pretty strong memory ordering which only really
>> + * allows for some stores to be re-ordered after loads.
>> + */
>> +#include "tcg-mo.h"
>> +
>> +static inline int get_tcg_target_mo(void)
>> +{
>> +    return TCG_MO_ALL & ~TCG_MO_LD_ST;
>> +}
>> +
>
> Shouldn't this be TCG_MO_ALL & ~TCG_MO_ST_LD?

The case that x86 doesn't handle normally is store-after-load which is
what I assumed TCG_MO_LD_ST was. Perhaps we need some better comments
for each of the enums?

> Thanks,

--
Alex Bennée
Pranith Kumar Jan. 20, 2017, 2:30 p.m. UTC | #3
On Fri, Jan 20, 2017 at 5:53 AM, Alex Bennée <alex.bennee@linaro.org> wrote:
>
> The case that x86 doesn't handle normally is store-after-load which is
> what I assumed TCG_MO_LD_ST was. Perhaps we need some better comments
> for each of the enums?
>

OK. The enum is of the form TCG_MO_A_B, where A and B are in program
order. So x86 will be TCG_MO_ST_LD, i.e., a load following a store is
re-ordered before the store.

I'll send a patch adding this comment.

Thanks,
-- 
Pranith
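
To make the naming convention concrete, here is a commented sketch of the flags following the TCG_MO_A_B reading described above (A then B in program order). The bit values and the type name are assumptions for illustration, not quoted from the series:

/* Illustrative sketch only: each flag requires that a later B-type
 * access stays ordered after an earlier A-type access.
 */
typedef enum {
    TCG_MO_LD_LD = 0x01,  /* keep a later load after an earlier load   */
    TCG_MO_ST_LD = 0x02,  /* keep a later load after an earlier store  */
    TCG_MO_LD_ST = 0x04,  /* keep a later store after an earlier load  */
    TCG_MO_ST_ST = 0x08,  /* keep a later store after an earlier store */
    TCG_MO_ALL   = 0x0f,  /* enforce all four orderings                */
} TCGMemOrderSketch;

On this reading, store-then-load is the one ordering x86/TSO does not guarantee, so the backend default discussed above would be TCG_MO_ALL & ~TCG_MO_ST_LD rather than ~TCG_MO_LD_ST.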

Patch

diff --git a/configure b/configure
index 17d52cdd74..a23245fdf4 100755
--- a/configure
+++ b/configure
@@ -5881,6 +5881,7 @@  mkdir -p $target_dir
 echo "# Automatically generated by configure - do not modify" > $config_target_mak
 
 bflt="no"
+mttcg="no"
 interp_prefix1=$(echo "$interp_prefix" | sed "s/%M/$target_name/g")
 gdb_xml_files=""
 
@@ -5899,11 +5900,13 @@  case "$target_name" in
   arm|armeb)
     TARGET_ARCH=arm
     bflt="yes"
+    mttcg="yes"
     gdb_xml_files="arm-core.xml arm-vfp.xml arm-vfp3.xml arm-neon.xml"
   ;;
   aarch64)
     TARGET_BASE_ARCH=arm
     bflt="yes"
+    mttcg="yes"
     gdb_xml_files="aarch64-core.xml aarch64-fpu.xml arm-core.xml arm-vfp.xml arm-vfp3.xml arm-neon.xml"
   ;;
   cris)
@@ -6055,6 +6058,9 @@  if test "$target_bigendian" = "yes" ; then
 fi
 if test "$target_softmmu" = "yes" ; then
   echo "CONFIG_SOFTMMU=y" >> $config_target_mak
+  if test "$mttcg" = "yes" ; then
+    echo "TARGET_SUPPORTS_MTTCG=y" >> $config_target_mak
+  fi
 fi
 if test "$target_user_only" = "yes" ; then
   echo "CONFIG_USER_ONLY=y" >> $config_target_mak
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 29d15fc522..659e246a54 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -29,6 +29,9 @@ 
 #  define TARGET_LONG_BITS 32
 #endif
 
+/* ARM processors have a weak memory model */
+#define TCG_DEFAULT_MO      (0)
+
 #define CPUArchState struct CPUARMState
 
 #include "qemu-common.h"
diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h
index 21d96ec35c..536190f647 100644
--- a/tcg/i386/tcg-target.h
+++ b/tcg/i386/tcg-target.h
@@ -165,4 +165,20 @@  static inline void flush_icache_range(uintptr_t start, uintptr_t stop)
 {
 }
 
+/* This defines the natural memory order supported by this
+ * architecture before guarantees made by various barrier
+ * instructions.
+ *
+ * The x86 has a pretty strong memory ordering which only really
+ * allows for some stores to be re-ordered after loads.
+ */
+#include "tcg-mo.h"
+
+static inline int get_tcg_target_mo(void)
+{
+    return TCG_MO_ALL & ~TCG_MO_LD_ST;
+}
+
+#define TCG_TARGET_DEFAULT_MO get_tcg_target_mo()
+
 #endif
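
Putting the two new macros together: the ARM guest hunk declares a weak memory order (TCG_DEFAULT_MO of 0), while the x86 backend hunk is meant to declare everything except the store-then-load relaxation (per the correction discussed in the comments above). A self-contained sketch of the comparison this enables (flag bit values and the helper are assumptions for illustration, not the actual TCG logic):

/* Illustrative only: the values and helper below are assumptions. */
enum {
    TCG_MO_LD_LD = 0x01,
    TCG_MO_ST_LD = 0x02,   /* the ordering x86/TSO does not provide */
    TCG_MO_LD_ST = 0x04,
    TCG_MO_ST_ST = 0x08,
    TCG_MO_ALL   = 0x0f,
};

#define GUEST_DEFAULT_MO  0                              /* ARM: weak   */
#define HOST_DEFAULT_MO   (TCG_MO_ALL & ~TCG_MO_ST_LD)   /* x86: strong */

/* A fence is only needed for orderings the guest requires but the host
 * does not already provide for plain loads and stores.
 */
static inline int orderings_needing_fence(int guest_mo, int host_mo)
{
    return guest_mo & ~host_mo;
}

With an ARM guest on an x86 host the result is 0, which is the "same-or-better support for normal load/store operations" the commit message relies on; a weaker host or a stronger guest would leave bits set and require explicit host barriers.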