From patchwork Fri Jan 27 10:39:03 2017
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 92614
From: Alex Bennée
To: mttcg@listserver.greensocs.com, qemu-devel@nongnu.org,
	fred.konrad@greensocs.com, a.rigo@virtualopensystems.com,
	cota@braap.org, bobby.prani@gmail.com, nikunj@linux.vnet.ibm.com
Cc: peter.maydell@linaro.org, claudio.fontana@huawei.com,
	Peter Crosthwaite, jan.kiszka@siemens.com, mark.burton@greensocs.com,
	serge.fdrv@gmail.com, pbonzini@redhat.com, Alex Bennée,
	bamvor.zhangjian@linaro.org, rth@twiddle.net
Date: Fri, 27 Jan 2017 10:39:03 +0000
Message-Id: <20170127103922.19658-7-alex.bennee@linaro.org>
In-Reply-To: <20170127103922.19658-1-alex.bennee@linaro.org>
References: <20170127103922.19658-1-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.11.0
Subject: [Qemu-devel] [PATCH v8 06/25] tcg: add kick timer for single-threaded vCPU emulation

Currently we rely on the side effect of the main loop grabbing the
iothread_mutex to give any long-running basic block chains a kick and
ensure the next vCPU is scheduled. As this code is being re-factored
and rationalised we now do it explicitly here.

Signed-off-by: Alex Bennée
Reviewed-by: Richard Henderson

---
v2
  - re-base fixes
  - get_ticks_per_sec() -> NANOSECONDS_PER_SEC
v3
  - add define for TCG_KICK_FREQ
  - fix checkpatch warning
v4
  - wrap next calc in inline qemu_tcg_next_kick() instead of macro
v5
  - move all kick code into own section
  - use global for timer
  - add helper functions to start/stop timer
  - stop timer when all cores paused
v7
  - checkpatch > 80 char fix
---
 cpus.c | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

--
2.11.0

Reviewed-by: Pranith Kumar

diff --git a/cpus.c b/cpus.c
index 76b6e04332..a98925105c 100644
--- a/cpus.c
+++ b/cpus.c
@@ -767,6 +767,53 @@ void configure_icount(QemuOpts *opts, Error **errp)
 }
 
 /***********************************************************/
+/* TCG vCPU kick timer
+ *
+ * The kick timer is responsible for moving single-threaded vCPU
+ * emulation on to the next vCPU. If more than one vCPU is running, a
+ * timer event will force a cpu->exit so the next vCPU can get
+ * scheduled.
+ *
+ * The timer is removed if all vCPUs are idle and restarted again once
+ * idleness ends.
+ */
+
+static QEMUTimer *tcg_kick_vcpu_timer;
+
+static void qemu_cpu_kick_no_halt(void);
+
+#define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10)
+
+static inline int64_t qemu_tcg_next_kick(void)
+{
+    return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + TCG_KICK_PERIOD;
+}
+
+static void kick_tcg_thread(void *opaque)
+{
+    timer_mod(tcg_kick_vcpu_timer, qemu_tcg_next_kick());
+    qemu_cpu_kick_no_halt();
+}
+
+static void start_tcg_kick_timer(void)
+{
+    if (!tcg_kick_vcpu_timer && CPU_NEXT(first_cpu)) {
+        tcg_kick_vcpu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
+                                           kick_tcg_thread, NULL);
+        timer_mod(tcg_kick_vcpu_timer, qemu_tcg_next_kick());
+    }
+}
+
+static void stop_tcg_kick_timer(void)
+{
+    if (tcg_kick_vcpu_timer) {
+        timer_del(tcg_kick_vcpu_timer);
+        tcg_kick_vcpu_timer = NULL;
+    }
+}
+
+
+/***********************************************************/
 void hw_error(const char *fmt, ...)
 {
     va_list ap;
@@ -1020,9 +1067,12 @@ static void qemu_wait_io_event_common(CPUState *cpu)
 static void qemu_tcg_wait_io_event(CPUState *cpu)
 {
     while (all_cpu_threads_idle()) {
+        stop_tcg_kick_timer();
         qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
     }
 
+    start_tcg_kick_timer();
+
     while (iothread_requesting_mutex) {
         qemu_cond_wait(&qemu_io_proceeded_cond, &qemu_global_mutex);
     }
@@ -1222,6 +1272,15 @@ static void deal_with_unplugged_cpus(void)
     }
 }
 
+/* Single-threaded TCG
+ *
+ * In the single-threaded case each vCPU is simulated in turn. If
+ * there is more than a single vCPU we create a simple timer to kick
+ * the vCPU and ensure we don't get stuck in a tight loop in one vCPU.
+ * This is done explicitly rather than relying on side-effects
+ * elsewhere.
+ */
+
 static void *qemu_tcg_cpu_thread_fn(void *arg)
 {
     CPUState *cpu = arg;
@@ -1248,6 +1307,8 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
         }
     }
 
+    start_tcg_kick_timer();
+
     /* process any pending work */
     atomic_mb_set(&exit_request, 1);
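
For anyone reading the patch without the rest of cpus.c to hand, the fragment
below is a standalone sketch of the same self-re-arming kick pattern. It is
not QEMU code: POSIX setitimer()/SIGALRM stands in for timer_new_ns() and
timer_mod() on QEMU_CLOCK_VIRTUAL, and the kick_pending flag plays the role of
the exit request raised by qemu_cpu_kick_no_halt(); names such as
KICK_PERIOD_US and kick_handler are made up for illustration.

/* Standalone sketch of the kick pattern above -- not QEMU code.
 * A 10 Hz timer periodically interrupts a busy "vCPU" loop so the
 * scheduler can move on, mirroring TCG_KICK_PERIOD / kick_tcg_thread().
 */
#include <signal.h>
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

#define KICK_PERIOD_US (1000000 / 10)   /* 10 Hz, like TCG_KICK_PERIOD */

static atomic_int kick_pending;

static void kick_handler(int signum)
{
    (void)signum;
    /* Rough equivalent of qemu_cpu_kick_no_halt(): ask the running
     * "vCPU" loop to yield at the next opportunity. */
    atomic_store(&kick_pending, 1);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = kick_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    /* A non-zero it_interval makes the timer re-arm itself, just as
     * kick_tcg_thread() calls timer_mod() again on every expiry. */
    struct itimerval kick = {
        .it_interval = { .tv_sec = 0, .tv_usec = KICK_PERIOD_US },
        .it_value    = { .tv_sec = 0, .tv_usec = KICK_PERIOD_US },
    };
    setitimer(ITIMER_REAL, &kick, NULL);

    /* Stand-in for executing one vCPU's translated blocks: a tight
     * loop that only yields when it notices a pending kick. */
    for (int kicks = 0; kicks < 5; ) {
        if (atomic_exchange(&kick_pending, 0)) {
            printf("kick %d: switch to the next vCPU here\n", ++kicks);
        }
    }

    /* No more runnable work: disarm the timer, as stop_tcg_kick_timer()
     * does when all vCPUs go idle. */
    struct itimerval off;
    memset(&off, 0, sizeof(off));
    setitimer(ITIMER_REAL, &off, NULL);
    return 0;
}

The property that matters in both versions is that the timer callback both
requests the switch and re-arms itself, so a vCPU stuck in a tight
translated-code loop is still interrupted roughly ten times a second without
any help from the main loop.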