[RFC,v4,14/28] tcg: add kick timer for single-threaded vCPU emulation

Message ID 1470929064-4092-15-git-send-email-alex.bennee@linaro.org
State New

Commit Message

Alex Bennée Aug. 11, 2016, 3:24 p.m. UTC
Currently we rely on the side effect of the main loop grabbing the
iothread_mutex to give any long-running basic block chains a kick to
ensure the next vCPU is scheduled. As this code is being refactored
and rationalised, we now do it explicitly here.
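
Concretely, the patch below arms a QEMU_CLOCK_VIRTUAL timer that re-arms
itself from its own callback. The kick period works out as:

    TCG_KICK_PERIOD = NANOSECONDS_PER_SECOND / 10
                    = 1,000,000,000 ns / 10
                    = 100 ms, i.e. a 10 Hz kick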

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>


---
v2
  - re-base fixes
  - get_ticks_per_sec() -> NANOSECONDS_PER_SECOND
v3
  - add define for TCG_KICK_FREQ
  - fix checkpatch warning
v4
  - wrap next calc in inline qemu_tcg_next_kick() instead of macro
---
 cpus.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

-- 
2.7.4

Comments

Alex Bennée Sept. 7, 2016, 10:15 a.m. UTC | #1
Paolo Bonzini <pbonzini@redhat.com> writes:

> On 07/09/2016 05:25, Richard Henderson wrote:
>>>
>>> +    /* Set to kick if we have to do more than one vCPU */
>>> +    if (CPU_NEXT(first_cpu)) {
>>> +        kick_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,  kick_tcg_thread,
>>> +                                  &kick_timer);
>>
>> I'm not especially keen on this pointer to local variable thing.
>> Perhaps better as
>>
>>   kick_timer = timer_new_ns(..., NULL);
>>   kick_timer->opaque = kick_timer;
>
> Or put it in CPUState and pass that.

I was trying to avoid expanding CPUState for something that is only
required in one mode of operation. However, I appreciate that abusing
the stack is a little magical even though we never leave the function.

If setting kick_timer->opaque directly doesn't violate the interface
then I'll do that.

>
> Paolo
>
>> and avoid the indirection in kick_tcg_thread.



--
Alex Bennée
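
For reference, a rough sketch of the rework suggested above (create the
timer with a NULL opaque, then point the opaque back at the timer
itself), assuming direct assignment to the QEMUTimer opaque field is
permitted, which is the open question in this thread:

    /* Sketch only, not the posted patch: the timer is created with a
     * NULL opaque, then made to carry a pointer to itself, so the
     * callback receives the timer directly. */
    kick_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, kick_tcg_thread, NULL);
    kick_timer->opaque = kick_timer;
    timer_mod(kick_timer, qemu_tcg_next_kick());

    /* The callback then drops one level of indirection: */
    static void kick_tcg_thread(void *opaque)
    {
        QEMUTimer *self = opaque;
        timer_mod(self, qemu_tcg_next_kick());
        qemu_cpu_kick_no_halt();
    }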
Alex Bennée Sept. 7, 2016, 10:19 a.m. UTC | #2
Richard Henderson <rth@twiddle.net> writes:

> On 08/11/2016 08:24 AM, Alex Bennée wrote:
>> Currently we rely on the side effect of the main loop grabbing the
>> iothread_mutex to give any long-running basic block chains a kick to
>> ensure the next vCPU is scheduled. As this code is being refactored
>> and rationalised, we now do it explicitly here.
>>
>> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>>
>> ---
>> v2
>>   - re-base fixes
>>   - get_ticks_per_sec() -> NANOSECONDS_PER_SECOND
>> v3
>>   - add define for TCG_KICK_FREQ
>>   - fix checkpatch warning
>> v4
>>   - wrap next calc in inline qemu_tcg_next_kick() instead of macro
>> ---
>>  cpus.c | 32 ++++++++++++++++++++++++++++++++
>>  1 file changed, 32 insertions(+)
>>
>> diff --git a/cpus.c b/cpus.c
>> index b5b45b8..8c49d6c 100644
>> --- a/cpus.c
>> +++ b/cpus.c
>> @@ -1185,9 +1185,34 @@ static void deal_with_unplugged_cpus(void)
>>      }
>>  }
>>
>> +/* Single-threaded TCG
>> + *
>> + * In the single-threaded case each vCPU is simulated in turn. If
>> + * there is more than a single vCPU we create a simple timer to kick
>> + * the vCPU and ensure we don't get stuck in a tight loop in one vCPU.
>> + * This is done explicitly rather than relying on side-effects
>> + * elsewhere.
>> + */
>> +static void qemu_cpu_kick_no_halt(void);
>> +
>> +#define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10)
>> +
>> +static inline int64_t qemu_tcg_next_kick(void)
>> +{
>> +    return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + TCG_KICK_PERIOD;
>> +}
>> +
>> +static void kick_tcg_thread(void *opaque)
>> +{
>> +    QEMUTimer *self = *(QEMUTimer **) opaque;
>> +    timer_mod(self, qemu_tcg_next_kick());
>> +    qemu_cpu_kick_no_halt();
>> +}
>> +
>>  static void *qemu_tcg_cpu_thread_fn(void *arg)
>>  {
>>      CPUState *cpu = arg;
>> +    QEMUTimer *kick_timer;
>>
>>      rcu_register_thread();
>>
>> @@ -1211,6 +1236,13 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>>          }
>>      }
>>
>> +    /* Set to kick if we have to do more than one vCPU */
>> +    if (CPU_NEXT(first_cpu)) {
>> +        kick_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,  kick_tcg_thread,
>> +                                  &kick_timer);
>
> I'm not especially keen on this pointer to local variable thing.
> Perhaps better as
>
>    kick_timer = timer_new_ns(..., NULL);
>    kick_timer->opaque = kick_timer;

Yeah, it is a bit less magical that way.

>
> and avoid the indirection in kick_tcg_thread.
>
> Also, we should shut down the timer when the cpu is removed, surely?

That is an interesting point, but the timer is shared across all CPUs,
so we need to keep kicking until there is only one CPU left. AFAICT the
current CPU hotplug code basically just suspends the vCPU forever
rather than trying to clean up all its associated resources.

>
>
> r~



--
Alex Bennée
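
If teardown were wanted, a hypothetical sketch (not part of this patch,
which relies on hotplug merely suspending the vCPU) could drop the
shared timer once a single vCPU remains:

    /* Hypothetical cleanup on the unplug path: once only one vCPU is
     * left, the shared kick timer is no longer needed. */
    if (kick_timer && !CPU_NEXT(first_cpu)) {
        timer_del(kick_timer);   /* cancel any pending expiry */
        timer_free(kick_timer);  /* release the timer */
        kick_timer = NULL;
    }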

Patch

diff --git a/cpus.c b/cpus.c
index b5b45b8..8c49d6c 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1185,9 +1185,34 @@ static void deal_with_unplugged_cpus(void)
     }
 }
 
+/* Single-threaded TCG
+ *
+ * In the single-threaded case each vCPU is simulated in turn. If
+ * there is more than a single vCPU we create a simple timer to kick
+ * the vCPU and ensure we don't get stuck in a tight loop in one vCPU.
+ * This is done explicitly rather than relying on side-effects
+ * elsewhere.
+ */
+static void qemu_cpu_kick_no_halt(void);
+
+#define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10)
+
+static inline int64_t qemu_tcg_next_kick(void)
+{
+    return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + TCG_KICK_PERIOD;
+}
+
+static void kick_tcg_thread(void *opaque)
+{
+    QEMUTimer *self = *(QEMUTimer **) opaque;
+    timer_mod(self, qemu_tcg_next_kick());
+    qemu_cpu_kick_no_halt();
+}
+
 static void *qemu_tcg_cpu_thread_fn(void *arg)
 {
     CPUState *cpu = arg;
+    QEMUTimer *kick_timer;
 
     rcu_register_thread();
 
@@ -1211,6 +1236,13 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
         }
     }
 
+    /* Set to kick if we have to do more than one vCPU */
+    if (CPU_NEXT(first_cpu)) {
+        kick_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,  kick_tcg_thread,
+                                  &kick_timer);
+        timer_mod(kick_timer, qemu_tcg_next_kick());
+    }
+
     /* process any pending work */
     atomic_mb_set(&exit_request, 1);