From patchwork Tue Jan 23 08:54:21 2018
X-Patchwork-Submitter: Pavel Dovgalyuk
X-Patchwork-Id: 125486
From: Pavel Dovgalyuk
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, peter.maydell@linaro.org, pavel.dovgaluk@ispras.ru, mst@redhat.com, jasowang@redhat.com, quintela@redhat.com, zuban32s@gmail.com, maria.klimushenkova@ispras.ru, dovgaluk@ispras.ru, kraxel@redhat.com, boost.lists@gmail.com, pbonzini@redhat.com, alex.bennee@linaro.org
Date: Tue, 23 Jan 2018 11:54:21 +0300
Subject: [Qemu-devel] [RFC PATCH v5 11/24] cpus: push BQL lock to qemu_*_wait_io_event
Message-ID: <20180123085421.3419.45127.stgit@pasha-VirtualBox>
In-Reply-To: <20180123085319.3419.97865.stgit@pasha-VirtualBox>
References: <20180123085319.3419.97865.stgit@pasha-VirtualBox>
User-Agent: StGit/0.17.1-dirty

From: Alex Bennée

We only really need to grab the lock for the initial setup (so we don't race with the thread that spawned us). After that we can drop the lock for the whole main loop and only take it again when waiting for I/O events. There is a slight wrinkle for the round-robin TCG thread: it also expires timers, which still has to be done under the BQL because they belong to the main loop.

This is stage one of reducing the lock's impact: we drop the requirement of holding the BQL implicitly for async work and only take the lock when we need to sleep on cpu->halt_cond.
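To make the resulting shape of the vCPU threads concrete, here is a minimal, self-contained pthread sketch of the pattern (illustrative only: global_lock, halt_cond, wait_io_event and the other names are hypothetical stand-ins for qemu_global_mutex, cpu->halt_cond and qemu_wait_io_event, not the QEMU API): setup happens under the lock, the lock is dropped before the main loop, and it is only re-taken inside the wait helper, where the condition-variable wait releases it again while sleeping.

/*
 * Illustrative sketch only (not QEMU code): a stand-in vCPU thread that
 * holds a "BQL"-like global lock just for setup and for sleeping on its
 * halt condition, mirroring the shape this patch gives the real
 * qemu_*_cpu_thread_fn functions.  All names here are hypothetical.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER; /* ~qemu_global_mutex */
static pthread_cond_t halt_cond = PTHREAD_COND_INITIALIZER;     /* ~cpu->halt_cond */
static pthread_cond_t created_cond = PTHREAD_COND_INITIALIZER;  /* ~qemu_cpu_cond */
static bool cpu_idle;
static bool created;

/* ~qemu_wait_io_event(): the only place the main loop takes the lock. */
static void wait_io_event(void)
{
    pthread_mutex_lock(&global_lock);
    while (cpu_idle) {
        /* pthread_cond_wait() drops the lock while sleeping and
         * re-acquires it before returning, so I/O work can run. */
        pthread_cond_wait(&halt_cond, &global_lock);
    }
    pthread_mutex_unlock(&global_lock);
}

static void *vcpu_thread_fn(void *arg)
{
    /* Setup still happens under the lock so we don't race with the
     * thread that spawned us... */
    pthread_mutex_lock(&global_lock);
    created = true;
    /* ...but the lock is dropped before entering the main loop. */
    pthread_mutex_unlock(&global_lock);
    pthread_cond_signal(&created_cond);

    for (int i = 0; i < 3; i++) {
        /* "Guest execution" runs without the global lock held. */
        printf("executing guest code, iteration %d\n", i);
        usleep(1000);
        wait_io_event();
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, vcpu_thread_fn, NULL);

    pthread_mutex_lock(&global_lock);
    while (!created) {
        pthread_cond_wait(&created_cond, &global_lock);
    }
    pthread_mutex_unlock(&global_lock);

    pthread_join(tid, NULL);
    return 0;
}

The key property is that the loop body never holds the lock while running guest code; only the sleep on the halt condition does.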
Signed-off-by: Alex Bennée
Signed-off-by: Pavel Dovgalyuk
---
 accel/kvm/kvm-all.c |    1 -
 cpus.c              |   29 +++++++++++++++++------------
 2 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 071f4f5..9628512 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1863,7 +1863,6 @@ int kvm_cpu_exec(CPUState *cpu)
 
     qemu_mutex_unlock_iothread();
     cpu_exec_start(cpu);
-
     do {
         MemTxAttrs attrs;
 
diff --git a/cpus.c b/cpus.c
index 2cb0af9..577c764 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1141,6 +1141,7 @@ static void qemu_wait_io_event_common(CPUState *cpu)
 
 static void qemu_tcg_rr_wait_io_event(CPUState *cpu)
 {
+    qemu_mutex_lock_iothread();
     while (all_cpu_threads_idle()) {
         stop_tcg_kick_timer();
         qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
@@ -1149,10 +1150,13 @@ static void qemu_tcg_rr_wait_io_event(CPUState *cpu)
     start_tcg_kick_timer();
 
     qemu_wait_io_event_common(cpu);
+    qemu_mutex_unlock_iothread();
 }
 
 static void qemu_wait_io_event(CPUState *cpu)
 {
+    qemu_mutex_lock_iothread();
+
     while (cpu_thread_is_idle(cpu)) {
         qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
     }
@@ -1164,6 +1168,7 @@ static void qemu_wait_io_event(CPUState *cpu)
     }
 #endif
     qemu_wait_io_event_common(cpu);
+    qemu_mutex_unlock_iothread();
 }
 
 static void *qemu_kvm_cpu_thread_fn(void *arg)
@@ -1189,6 +1194,8 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
 
     /* signal CPU creation */
     cpu->created = true;
+    qemu_mutex_unlock_iothread();
+
     qemu_cond_signal(&qemu_cpu_cond);
 
     do {
@@ -1204,7 +1211,6 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
     qemu_kvm_destroy_vcpu(cpu);
     cpu->created = false;
     qemu_cond_signal(&qemu_cpu_cond);
-    qemu_mutex_unlock_iothread();
     return NULL;
 }
 
@@ -1231,10 +1237,10 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
 
     /* signal CPU creation */
     cpu->created = true;
+    qemu_mutex_unlock_iothread();
     qemu_cond_signal(&qemu_cpu_cond);
 
     while (1) {
-        qemu_mutex_unlock_iothread();
         do {
             int sig;
             r = sigwait(&waitset, &sig);
@@ -1243,7 +1249,6 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
             perror("sigwait");
             exit(1);
         }
-        qemu_mutex_lock_iothread();
         qemu_wait_io_event(cpu);
     }
 
@@ -1333,11 +1338,9 @@ static int tcg_cpu_exec(CPUState *cpu)
 #ifdef CONFIG_PROFILER
     ti = profile_getclock();
 #endif
-    qemu_mutex_unlock_iothread();
     cpu_exec_start(cpu);
     ret = cpu_exec(cpu);
     cpu_exec_end(cpu);
-    qemu_mutex_lock_iothread();
 #ifdef CONFIG_PROFILER
     tcg_time += profile_getclock() - ti;
 #endif
@@ -1397,6 +1400,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
             qemu_wait_io_event_common(cpu);
         }
     }
+    qemu_mutex_unlock_iothread();
 
     start_tcg_kick_timer();
 
@@ -1406,6 +1410,8 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
     cpu->exit_request = 1;
 
     while (1) {
+        qemu_mutex_lock_iothread();
+
         /* Account partial waits to QEMU_CLOCK_VIRTUAL.  */
         qemu_account_warp_timer();
 
@@ -1414,6 +1420,8 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
          */
         handle_icount_deadline();
 
+        qemu_mutex_unlock_iothread();
+
         if (!cpu) {
             cpu = first_cpu;
         }
@@ -1439,9 +1447,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
                     cpu_handle_guest_debug(cpu);
                     break;
                 } else if (r == EXCP_ATOMIC) {
-                    qemu_mutex_unlock_iothread();
                     cpu_exec_step_atomic(cpu);
-                    qemu_mutex_lock_iothread();
                     break;
                 }
             } else if (cpu->stop) {
@@ -1482,6 +1488,7 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
     current_cpu = cpu;
 
     hax_init_vcpu(cpu);
+    qemu_mutex_unlock_iothread();
     qemu_cond_signal(&qemu_cpu_cond);
 
     while (1) {
@@ -1518,8 +1525,9 @@ static void *qemu_hvf_cpu_thread_fn(void *arg)
 
     hvf_init_vcpu(cpu);
 
-    /* signal CPU creation */
     cpu->created = true;
+    qemu_mutex_unlock_iothread();
+    /* signal CPU creation */
     qemu_cond_signal(&qemu_cpu_cond);
 
     do {
@@ -1535,7 +1543,6 @@ static void *qemu_hvf_cpu_thread_fn(void *arg)
     hvf_vcpu_destroy(cpu);
     cpu->created = false;
     qemu_cond_signal(&qemu_cpu_cond);
-    qemu_mutex_unlock_iothread();
     return NULL;
 }
 
@@ -1568,6 +1575,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     cpu->created = true;
     cpu->can_do_io = 1;
     current_cpu = cpu;
+    qemu_mutex_unlock_iothread();
    qemu_cond_signal(&qemu_cpu_cond);
 
     /* process any pending work */
@@ -1592,9 +1600,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
                 g_assert(cpu->halted);
                 break;
             case EXCP_ATOMIC:
-                qemu_mutex_unlock_iothread();
                 cpu_exec_step_atomic(cpu);
-                qemu_mutex_lock_iothread();
             default:
                 /* Ignore everything else? */
                 break;
@@ -1603,7 +1609,6 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     qemu_tcg_destroy_vcpu(cpu);
     cpu->created = false;
     qemu_cond_signal(&qemu_cpu_cond);
-    qemu_mutex_unlock_iothread();
     return NULL;
 }
 
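For the other half of the protocol, here is a similarly hedged pthread sketch (again with hypothetical names, not the real QEMU API) of what a kick from the I/O side has to do: flip the vCPU's idle predicate and broadcast the halt condition under the same global lock that the wait helper takes, so a wake-up cannot be lost even though the vCPU main loop itself now runs unlocked.

/*
 * Illustrative sketch only (hypothetical names, not the QEMU API): the
 * "kicker" side of the halt-condition protocol.  Because the waiter
 * re-checks its idle predicate under global_lock inside its wait helper,
 * updating the predicate and broadcasting under the same lock guarantees
 * the wake-up is not lost.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER; /* ~qemu_global_mutex   */
static pthread_cond_t halt_cond = PTHREAD_COND_INITIALIZER;     /* ~cpu->halt_cond       */
static bool cpu_idle = true;                                    /* ~cpu_thread_is_idle() */

/* Roughly what a kick from the I/O thread must do. */
static void kick_vcpu(void)
{
    pthread_mutex_lock(&global_lock);
    cpu_idle = false;                   /* give the vCPU work to do  */
    pthread_cond_broadcast(&halt_cond); /* wake it if it is sleeping */
    pthread_mutex_unlock(&global_lock);
}

/* The waiter: the shape of the (post-patch) qemu_wait_io_event() body. */
static void *vcpu_waiter(void *arg)
{
    pthread_mutex_lock(&global_lock);
    while (cpu_idle) {
        pthread_cond_wait(&halt_cond, &global_lock);
    }
    pthread_mutex_unlock(&global_lock);
    printf("vCPU woken up\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, vcpu_waiter, NULL);
    kick_vcpu();
    pthread_join(tid, NULL);
    return 0;
}

In QEMU the kick side (e.g. qemu_cpu_kick()) already broadcasts cpu->halt_cond; the sketch only illustrates why the waiter re-checking its predicate under the lock keeps this race-free, which is what makes pushing the lock into qemu_*_wait_io_event sufficient.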