From patchwork Thu Oct 27 15:10:09 2016
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 79721
Delivered-To: patch@linaro.org
From: Alex Bennée <alex.bennee@linaro.org>
To: pbonzini@redhat.com
Cc: mttcg@listserver.greensocs.com, peter.maydell@linaro.org,
    claudio.fontana@huawei.com, nikunj@linux.vnet.ibm.com,
    Peter Crosthwaite, jan.kiszka@siemens.com, mark.burton@greensocs.com,
    a.rigo@virtualopensystems.com, qemu-devel@nongnu.org, cota@braap.org,
    serge.fdrv@gmail.com, bobby.prani@gmail.com, rth@twiddle.net,
    Alex Bennée, fred.konrad@greensocs.com
Date: Thu, 27 Oct 2016 16:10:09 +0100
Message-Id: <20161027151030.20863-13-alex.bennee@linaro.org>
In-Reply-To: <20161027151030.20863-1-alex.bennee@linaro.org>
References: <20161027151030.20863-1-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.10.1
Subject: [Qemu-devel] [PATCH v5 12/33] tcg: cpus rm tcg_exec_all()

In preparation for multi-threaded TCG we remove tcg_exec_all() and move
all the CPU cycling into the main thread function. When MTTCG is enabled
we shall use a separate thread function which only handles one vCPU.
Signed-off-by: Alex Bennée
Reviewed-by: Sergey Fedorov
Reviewed-by: Richard Henderson

---
v2
  - update timer calls to new API on rebase
v3
  - move tcg_cpu_exec above thread function, drop static fwd declaration
v4
  - split mechanical moves into earlier patch
  - moved unplug logic into its own function, don't break smp boot
---
 cpus.c | 87 +++++++++++++++++++++++++++++++++---------------------------------
 1 file changed, 43 insertions(+), 44 deletions(-)

-- 
2.10.1

diff --git a/cpus.c b/cpus.c
index 77cc24b..cc49902 100644
--- a/cpus.c
+++ b/cpus.c
@@ -69,7 +69,6 @@
 
 #endif /* CONFIG_LINUX */
 
-static CPUState *next_cpu;
 int64_t max_delay;
 int64_t max_advance;
 
@@ -1119,46 +1118,26 @@ static int tcg_cpu_exec(CPUState *cpu)
     return ret;
 }
 
-static void tcg_exec_all(void)
+/* Destroy any remaining vCPUs which have been unplugged and have
+ * finished running
+ */
+static void deal_with_unplugged_cpus(void)
 {
-    int r;
-
-    /* Account partial waits to QEMU_CLOCK_VIRTUAL.  */
-    qemu_account_warp_timer();
-
-    if (next_cpu == NULL) {
-        next_cpu = first_cpu;
-    }
-    for (; next_cpu != NULL && !exit_request; next_cpu = CPU_NEXT(next_cpu)) {
-        CPUState *cpu = next_cpu;
-
-        qemu_clock_enable(QEMU_CLOCK_VIRTUAL,
-                          (cpu->singlestep_enabled & SSTEP_NOTIMER) == 0);
+    CPUState *cpu;
 
-        if (cpu_can_run(cpu)) {
-            r = tcg_cpu_exec(cpu);
-            if (r == EXCP_DEBUG) {
-                cpu_handle_guest_debug(cpu);
-                break;
-            } else if (r == EXCP_ATOMIC) {
-                cpu_exec_step_atomic(cpu);
-            }
-        } else if (cpu->stop || cpu->stopped) {
-            if (cpu->unplug) {
-                next_cpu = CPU_NEXT(cpu);
-            }
+    CPU_FOREACH(cpu) {
+        if (cpu->unplug && !cpu_can_run(cpu)) {
+            qemu_tcg_destroy_vcpu(cpu);
+            cpu->created = false;
+            qemu_cond_signal(&qemu_cpu_cond);
             break;
         }
     }
-
-    /* Pairs with smp_wmb in qemu_cpu_kick.  */
-    atomic_mb_set(&exit_request, 0);
 }
 
 static void *qemu_tcg_cpu_thread_fn(void *arg)
 {
     CPUState *cpu = arg;
-    CPUState *remove_cpu = NULL;
 
     rcu_register_thread();
 
@@ -1185,8 +1164,39 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     /* process any pending work */
     atomic_mb_set(&exit_request, 1);
 
+    cpu = first_cpu;
+
     while (1) {
-        tcg_exec_all();
+        /* Account partial waits to QEMU_CLOCK_VIRTUAL.  */
+        qemu_account_warp_timer();
+
+        if (!cpu) {
+            cpu = first_cpu;
+        }
+
+        for (; cpu != NULL && !exit_request; cpu = CPU_NEXT(cpu)) {
+
+            qemu_clock_enable(QEMU_CLOCK_VIRTUAL,
+                              (cpu->singlestep_enabled & SSTEP_NOTIMER) == 0);
+
+            if (cpu_can_run(cpu)) {
+                int r;
+                r = tcg_cpu_exec(cpu);
+                if (r == EXCP_DEBUG) {
+                    cpu_handle_guest_debug(cpu);
+                    break;
+                }
+            } else if (cpu->stop || cpu->stopped) {
+                if (cpu->unplug) {
+                    cpu = CPU_NEXT(cpu);
+                }
+                break;
+            }
+
+        } /* for cpu.. */
+
+        /* Pairs with smp_wmb in qemu_cpu_kick.  */
+        atomic_mb_set(&exit_request, 0);
 
         if (use_icount) {
             int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);
@@ -1196,18 +1206,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
             }
         }
         qemu_tcg_wait_io_event(QTAILQ_FIRST(&cpus));
-        CPU_FOREACH(cpu) {
-            if (cpu->unplug && !cpu_can_run(cpu)) {
-                remove_cpu = cpu;
-                break;
-            }
-        }
-        if (remove_cpu) {
-            qemu_tcg_destroy_vcpu(remove_cpu);
-            cpu->created = false;
-            qemu_cond_signal(&qemu_cpu_cond);
-            remove_cpu = NULL;
-        }
+        deal_with_unplugged_cpus();
     }
 
     return NULL;