From patchwork Fri Jun 3 20:40:26 2016
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 69284
From: Alex Bennée <alex.bennee@linaro.org>
To: mttcg@listserver.greensocs.com, qemu-devel@nongnu.org,
    fred.konrad@greensocs.com, a.rigo@virtualopensystems.com,
    serge.fdrv@gmail.com, cota@braap.org, bobby.prani@gmail.com
Cc: peter.maydell@linaro.org, Peter Crosthwaite, claudio.fontana@huawei.com,
    Riku Voipio, mark.burton@greensocs.com, jan.kiszka@siemens.com,
    pbonzini@redhat.com, Alex Bennée, rth@twiddle.net
Date: Fri, 3 Jun 2016 21:40:26 +0100
Message-Id: <1464986428-6739-18-git-send-email-alex.bennee@linaro.org>
In-Reply-To: <1464986428-6739-1-git-send-email-alex.bennee@linaro.org>
References: <1464986428-6739-1-git-send-email-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.7.4
Subject: [Qemu-devel] [RFC v3 17/19] tcg: enable thread-per-vCPU

There are a number of changes that occur at the same time here:

  - tb_lock is no longer a NOP for SoftMMU

    The tb_lock protects both translation and memory map structures. The
    debug assert is updated to reflect this.

  - introduce a single vCPU qemu_tcg_cpu_thread_fn

    One of these is spawned per vCPU with its own Thread and Condition
    variables. qemu_tcg_single_cpu_thread_fn is the new name for the old
    single threaded function.

  - the TLS current_cpu variable is now live for the lifetime of MTTCG
    vCPU threads. This is for future work where async jobs need to know
    the vCPU context they are operating in.

The user can now switch on multi-threaded behaviour and spawn a thread
per vCPU. A simple test like:

  ./arm/run ./arm/locking-test.flat -smp 4 -accel tcg,thread=multi

will now use 4 vCPU threads and report an expected FAIL (instead of the
unexpected PASS), as the default mode of the test has no protection when
incrementing a shared variable.

However we still default to a single thread for all vCPUs, as individual
front-ends and back-ends need additional fixes before they can safely
support:
  - atomic behaviour
  - tb invalidation
  - memory ordering

The function default_mttcg_enabled can be tweaked as support is added.
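As an illustration of that last point (not part of this patch), the default
could eventually be keyed off per-target readiness. A minimal sketch,
assuming a hypothetical TARGET_SUPPORTS_MTTCG define alongside the existing
use_icount flag:

/* Illustrative sketch only -- not from this series.  TARGET_SUPPORTS_MTTCG
 * is a hypothetical per-target define; use_icount is the existing icount
 * flag, whose deterministic execution still needs the single RR thread.
 */
static bool default_mttcg_enabled(void)
{
#if defined(TARGET_SUPPORTS_MTTCG)
    return !use_icount;   /* icount is not MTTCG-safe */
#else
    return false;         /* front-end/back-end fixes pending: stay single-threaded */
#endif
}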
Signed-off-by: KONRAD Frederic
Signed-off-by: Paolo Bonzini
[AJB: Some fixes, conditionally, commit rewording]
Signed-off-by: Alex Bennée

---
v1 (ajb):
  - fix merge conflicts
  - maintain single-thread approach
v2
  - re-base fixes (no longer has tb_find_fast lock tweak ahead)
  - remove bogus break condition on cpu->stop/stopped
  - only process exiting cpus exit_request
  - handle all cpus idle case (fixes shutdown issues)
  - sleep on EXCP_HALTED in mttcg mode (prevent crash on start-up)
  - move icount timer into helper
v3
  - update the commit message
  - rm kick_timer tweaks (move to earlier tcg_current_cpu tweaks)
  - ensure linux-user clears cpu->exit_request in loop
  - purging of global exit_request and tcg_current_cpu in earlier patches
  - fix checkpatch warnings
---
 cpu-exec.c        |   8 ----
 cpus.c            | 122 ++++++++++++++++++++++++++++++++++++++++--------------
 linux-user/main.c |   1 +
 translate-all.c   |  18 +++-----
 4 files changed, 98 insertions(+), 51 deletions(-)

-- 
2.7.4

diff --git a/cpu-exec.c b/cpu-exec.c
index e1fb9ca..5ad3865 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -297,7 +297,6 @@ static TranslationBlock *tb_find_slow(CPUState *cpu,
         goto found;
     }
 
-#ifdef CONFIG_USER_ONLY
     /* mmap_lock is needed by tb_gen_code, and mmap_lock must be
      * taken outside tb_lock. Since we're momentarily dropping
      * tb_lock, there's a chance that our desired tb has been
@@ -311,14 +310,11 @@ static TranslationBlock *tb_find_slow(CPUState *cpu,
         mmap_unlock();
         goto found;
     }
-#endif
 
     /* if no translated code available, then translate it now */
     tb = tb_gen_code(cpu, pc, cs_base, flags, 0);
 
-#ifdef CONFIG_USER_ONLY
     mmap_unlock();
-#endif
 
 found:
     /* we add the TB in the virtual pc hash table */
@@ -523,7 +519,6 @@ static inline void cpu_handle_interrupt(CPUState *cpu,
     }
     if (unlikely(cpu->exit_request || replay_has_interrupt())) {
-        cpu->exit_request = 0;
         cpu->exception_index = EXCP_INTERRUPT;
         cpu_loop_exit(cpu);
     }
@@ -661,8 +656,5 @@ int cpu_exec(CPUState *cpu)
     cc->cpu_exec_exit(cpu);
     rcu_read_unlock();
 
-    /* fail safe : never use current_cpu outside cpu_exec() */
-    current_cpu = NULL;
-
     return ret;
 }
diff --git a/cpus.c b/cpus.c
index 35374fd..419caa2 100644
--- a/cpus.c
+++ b/cpus.c
@@ -962,10 +962,7 @@ void run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
     qemu_cpu_kick(cpu);
     while (!atomic_mb_read(&wi.done)) {
-        CPUState *self_cpu = current_cpu;
-
         qemu_cond_wait(&qemu_work_cond, &qemu_global_mutex);
-        current_cpu = self_cpu;
     }
 }
@@ -1027,13 +1024,13 @@ static void flush_queued_work(CPUState *cpu)
 
 static void qemu_wait_io_event_common(CPUState *cpu)
 {
+    atomic_mb_set(&cpu->thread_kicked, false);
     if (cpu->stop) {
         cpu->stop = false;
         cpu->stopped = true;
         qemu_cond_broadcast(&qemu_pause_cond);
     }
     flush_queued_work(cpu);
-    cpu->thread_kicked = false;
 }
 
 static void qemu_tcg_wait_io_event(CPUState *cpu)
@@ -1042,9 +1039,7 @@ static void qemu_tcg_wait_io_event(CPUState *cpu)
         qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
     }
 
-    CPU_FOREACH(cpu) {
-        qemu_wait_io_event_common(cpu);
-    }
+    qemu_wait_io_event_common(cpu);
 }
 
 static void qemu_kvm_wait_io_event(CPUState *cpu)
@@ -1111,6 +1106,7 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
     cpu->can_do_io = 1;
+    current_cpu = cpu;
 
     sigemptyset(&waitset);
     sigaddset(&waitset, SIG_IPI);
@@ -1119,9 +1115,7 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
     cpu->created = true;
     qemu_cond_signal(&qemu_cpu_cond);
 
-    current_cpu = cpu;
     while (1) {
-        current_cpu = NULL;
         qemu_mutex_unlock_iothread();
         do {
             int sig;
@@ -1132,7 +1126,6 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
             exit(1);
         }
         qemu_mutex_lock_iothread();
-        current_cpu = cpu;
         qemu_wait_io_event_common(cpu);
     }
 
@@ -1249,7 +1242,7 @@ static void kick_tcg_thread(void *opaque)
     qemu_cpu_kick_rr_cpu();
 }
 
-static void *qemu_tcg_cpu_thread_fn(void *arg)
+static void *qemu_tcg_single_cpu_thread_fn(void *arg)
 {
     CPUState *cpu = arg;
     QEMUTimer *kick_timer;
@@ -1331,6 +1324,69 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     return NULL;
 }
 
+/* Multi-threaded TCG
+ *
+ * In the multi-threaded case each vCPU has its own thread. The TLS
+ * variable current_cpu can be used deep in the code to find the
+ * current CPUState for a given thread.
+ */
+
+static void *qemu_tcg_cpu_thread_fn(void *arg)
+{
+    CPUState *cpu = arg;
+
+    rcu_register_thread();
+
+    qemu_mutex_lock_iothread();
+    qemu_thread_get_self(cpu->thread);
+
+    cpu->thread_id = qemu_get_thread_id();
+    cpu->created = true;
+    cpu->can_do_io = 1;
+    current_cpu = cpu;
+    qemu_cond_signal(&qemu_cpu_cond);
+
+    /* process any pending work */
+    atomic_mb_set(&cpu->exit_request, 1);
+
+    while (1) {
+        bool sleep = false;
+
+        if (cpu_can_run(cpu)) {
+            int r = tcg_cpu_exec(cpu);
+            switch (r) {
+            case EXCP_DEBUG:
+                cpu_handle_guest_debug(cpu);
+                break;
+            case EXCP_HALTED:
+                /* during start-up the vCPU is reset and the thread is
+                 * kicked several times. If we don't ensure we go back
+                 * to sleep in the halted state we won't cleanly
+                 * start-up when the vCPU is enabled.
+                 */
+                sleep = true;
+                break;
+            default:
+                /* Ignore everything else? */
+                break;
+            }
+        } else {
+            sleep = true;
+        }
+
+        handle_icount_deadline();
+
+        if (sleep) {
+            qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+        }
+
+        atomic_mb_set(&cpu->exit_request, 0);
+        qemu_tcg_wait_io_event(cpu);
+    }
+
+    return NULL;
+}
+
 static void qemu_cpu_kick_thread(CPUState *cpu)
 {
 #ifndef _WIN32
@@ -1355,7 +1411,7 @@ void qemu_cpu_kick(CPUState *cpu)
     qemu_cond_broadcast(cpu->halt_cond);
     if (tcg_enabled()) {
         cpu_exit(cpu);
-        /* Also ensure current RR cpu is kicked */
+        /* NOP unless doing single-thread RR */
         qemu_cpu_kick_rr_cpu();
     } else {
         qemu_cpu_kick_thread(cpu);
@@ -1422,13 +1478,6 @@ void pause_all_vcpus(void)
 
     if (qemu_in_vcpu_thread()) {
         cpu_stop_current();
-        if (!kvm_enabled()) {
-            CPU_FOREACH(cpu) {
-                cpu->stop = false;
-                cpu->stopped = true;
-            }
-            return;
-        }
     }
 
     while (!all_vcpus_paused()) {
@@ -1462,29 +1511,42 @@ void resume_all_vcpus(void)
 static void qemu_tcg_init_vcpu(CPUState *cpu)
 {
     char thread_name[VCPU_THREAD_NAME_SIZE];
-    static QemuCond *tcg_halt_cond;
-    static QemuThread *tcg_cpu_thread;
+    static QemuCond *single_tcg_halt_cond;
+    static QemuThread *single_tcg_cpu_thread;
 
-    /* share a single thread for all cpus with TCG */
-    if (!tcg_cpu_thread) {
+    if (qemu_tcg_mttcg_enabled() || !single_tcg_cpu_thread) {
         cpu->thread = g_malloc0(sizeof(QemuThread));
         cpu->halt_cond = g_malloc0(sizeof(QemuCond));
         qemu_cond_init(cpu->halt_cond);
-        tcg_halt_cond = cpu->halt_cond;
-        snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG",
+
+        if (qemu_tcg_mttcg_enabled()) {
+            /* create a thread per vCPU with TCG (MTTCG) */
+            snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG",
                  cpu->cpu_index);
-        qemu_thread_create(cpu->thread, thread_name, qemu_tcg_cpu_thread_fn,
-                           cpu, QEMU_THREAD_JOINABLE);
+
+            qemu_thread_create(cpu->thread, thread_name, qemu_tcg_cpu_thread_fn,
+                               cpu, QEMU_THREAD_JOINABLE);
+
+        } else {
+            /* share a single thread for all cpus with TCG */
+            snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "ALL CPUs/TCG");
+
+            qemu_thread_create(cpu->thread, thread_name,
+                               qemu_tcg_single_cpu_thread_fn,
+                               cpu, QEMU_THREAD_JOINABLE);
+
+            single_tcg_halt_cond = cpu->halt_cond;
+            single_tcg_cpu_thread = cpu->thread;
+        }
 #ifdef _WIN32
         cpu->hThread = qemu_thread_get_handle(cpu->thread);
 #endif
         while (!cpu->created) {
             qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
         }
-        tcg_cpu_thread = cpu->thread;
     } else {
-        cpu->thread = tcg_cpu_thread;
-        cpu->halt_cond = tcg_halt_cond;
+        /* For non-MTTCG cases we share the thread */
+        cpu->thread = single_tcg_cpu_thread;
+        cpu->halt_cond = single_tcg_halt_cond;
     }
 }
diff --git a/linux-user/main.c b/linux-user/main.c
index b2bc6ab..522a1d7 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -207,6 +207,7 @@ static inline void cpu_exec_end(CPUState *cpu)
     }
     exclusive_idle();
     pthread_mutex_unlock(&exclusive_lock);
+    cpu->exit_request = false;
 }
 
 void cpu_list_lock(void)
diff --git a/translate-all.c b/translate-all.c
index 4bc5718..95e5284 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -83,7 +83,11 @@
 #endif
 
 #ifdef CONFIG_SOFTMMU
-#define assert_memory_lock() do { /* nothing */ } while (0)
+#define assert_memory_lock() do {   \
+    if (DEBUG_MEM_LOCKS) {          \
+        g_assert(have_tb_lock);     \
+    }                               \
+    } while (0)
 #else
 #define assert_memory_lock() do {   \
     if (DEBUG_MEM_LOCKS) {          \
@@ -147,36 +151,28 @@ static void *l1_map[V_L1_SIZE];
 TCGContext tcg_ctx;
 
 /* translation block context */
-#ifdef CONFIG_USER_ONLY
 __thread int have_tb_lock;
-#endif
 
 void tb_lock(void)
 {
-#ifdef CONFIG_USER_ONLY
     assert(!have_tb_lock);
     qemu_mutex_lock(&tcg_ctx.tb_ctx.tb_lock);
     have_tb_lock++;
-#endif
 }
 
 void tb_unlock(void)
 {
-#ifdef CONFIG_USER_ONLY
     assert(have_tb_lock);
     have_tb_lock--;
     qemu_mutex_unlock(&tcg_ctx.tb_ctx.tb_lock);
-#endif
 }
 
 void tb_lock_reset(void)
 {
-#ifdef CONFIG_USER_ONLY
     if (have_tb_lock) {
         qemu_mutex_unlock(&tcg_ctx.tb_ctx.tb_lock);
         have_tb_lock = 0;
     }
-#endif
 }
 
 #ifdef DEBUG_LOCKING
@@ -185,15 +181,11 @@ void tb_lock_reset(void)
 #define DEBUG_TB_LOCKS 0
 #endif
 
-#ifdef CONFIG_SOFTMMU
-#define assert_tb_lock() do { /* nothing */ } while (0)
-#else
 #define assert_tb_lock() do {           \
     if (DEBUG_TB_LOCKS) {               \
         g_assert(have_tb_lock);         \
     }                                   \
     } while (0)
-#endif
 
 static TranslationBlock *tb_find_pc(uintptr_t tc_ptr);
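For readers following the threading changes, here is a stand-alone sketch
(plain pthreads, not QEMU code; every name below is invented for the example)
of the pattern qemu_tcg_cpu_thread_fn and qemu_tcg_init_vcpu follow above:
one thread per vCPU, a "created" handshake against a shared mutex/condition
variable, and a thread-local current_cpu that stays valid for the thread's
lifetime. Compile with gcc -pthread.

/* Stand-alone analogue of the thread-per-vCPU pattern; not QEMU code. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct VCPU {
    int index;
    bool created;
    pthread_t thread;
} VCPU;

static pthread_mutex_t iothread_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cpu_created_cond = PTHREAD_COND_INITIALIZER;
static __thread VCPU *current_cpu;   /* one value per vCPU thread */

static void some_deep_helper(void)
{
    /* deep code can find its vCPU without it being passed down the stack */
    printf("running in the context of vCPU %d\n", current_cpu->index);
}

static void *vcpu_thread_fn(void *arg)
{
    VCPU *cpu = arg;

    current_cpu = cpu;                 /* live for the thread's lifetime */

    pthread_mutex_lock(&iothread_lock);
    cpu->created = true;
    pthread_cond_signal(&cpu_created_cond);
    pthread_mutex_unlock(&iothread_lock);

    some_deep_helper();                /* stand-in for the execution loop */
    return NULL;
}

int main(void)
{
    VCPU cpus[4] = { { 0 } };

    for (int i = 0; i < 4; i++) {
        cpus[i].index = i;
        pthread_create(&cpus[i].thread, NULL, vcpu_thread_fn, &cpus[i]);

        /* wait for the new thread to report itself created */
        pthread_mutex_lock(&iothread_lock);
        while (!cpus[i].created) {
            pthread_cond_wait(&cpu_created_cond, &iothread_lock);
        }
        pthread_mutex_unlock(&iothread_lock);
    }

    for (int i = 0; i < 4; i++) {
        pthread_join(cpus[i].thread, NULL);
    }
    return 0;
}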