From patchwork Fri Jan 19 08:44:09 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Pavel Dovgalyuk
X-Patchwork-Id: 125118
From: Pavel Dovgalyuk
To: qemu-devel@nongnu.org
Date: Fri, 19 Jan 2018 11:44:09 +0300
Message-ID: <20180119084409.7100.23132.stgit@pasha-VirtualBox>
In-Reply-To: <20180119084235.7100.98318.stgit@pasha-VirtualBox>
References: <20180119084235.7100.98318.stgit@pasha-VirtualBox>
User-Agent: StGit/0.17.1-dirty
Subject: [Qemu-devel] [RFC PATCH v4 12/23] cpus: push BQL lock to qemu_*_wait_io_event
Cc: kwolf@redhat.com, peter.maydell@linaro.org, pavel.dovgaluk@ispras.ru,
    mst@redhat.com, jasowang@redhat.com, quintela@redhat.com,
    zuban32s@gmail.com, maria.klimushenkova@ispras.ru, dovgaluk@ispras.ru,
    kraxel@redhat.com, boost.lists@gmail.com, pbonzini@redhat.com,
    alex.bennee@linaro.org

From: Alex Bennée

We only really need to grab the lock for the initial setup (so we don't
race with the thread that spawned us). After that we can drop the lock
for the whole main loop and only take it again while waiting for I/O
events.

There is a slight wrinkle for the round-robin TCG thread: it also expires
timers, which must be done under the BQL because they belong to the main
loop.

This is stage one of reducing the lock's impact: we drop the requirement
that async work implicitly holds the BQL, and only take the lock when we
need to sleep on cpu->halt_cond.
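To make the intended pattern concrete, here is a minimal, self-contained
pthread sketch of the idea. It is an analogy only, not QEMU code: the names
bql, cpu_cond, halt_cond, vcpu_thread and wait_io_event are invented for the
illustration and merely stand in for qemu_global_mutex, qemu_cpu_cond,
cpu->halt_cond and the per-accelerator thread functions. The lock is held for
thread setup and while sleeping on the halt condition, but dropped around the
execution loop itself. Build with -pthread if you want to try it.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-ins for qemu_global_mutex, qemu_cpu_cond and cpu->halt_cond. */
static pthread_mutex_t bql = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cpu_cond = PTHREAD_COND_INITIALIZER;
static pthread_cond_t halt_cond = PTHREAD_COND_INITIALIZER;
static bool created;
static bool idle = true;
static bool quit;

/* Like the reworked wait-for-IO-event helper: the lock is taken only
 * around the sleep on the halt condition, then dropped again. */
static bool wait_io_event(void)
{
    bool keep_running;

    pthread_mutex_lock(&bql);
    while (idle && !quit) {
        pthread_cond_wait(&halt_cond, &bql);
    }
    keep_running = !quit;
    pthread_mutex_unlock(&bql);
    return keep_running;
}

static void *vcpu_thread(void *arg)
{
    (void)arg;

    /* Initial setup still happens under the lock so we cannot race with
     * the thread that spawned us. */
    pthread_mutex_lock(&bql);
    created = true;
    pthread_cond_signal(&cpu_cond);
    pthread_mutex_unlock(&bql);

    do {
        /* "Guest execution" runs with the lock dropped. */
        usleep(1000);
    } while (wait_io_event());

    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, vcpu_thread, NULL);

    /* The spawning thread waits for the vCPU thread to finish setup... */
    pthread_mutex_lock(&bql);
    while (!created) {
        pthread_cond_wait(&cpu_cond, &bql);
    }
    pthread_mutex_unlock(&bql);

    /* ...and later wakes it up and asks it to exit. */
    usleep(10 * 1000);
    pthread_mutex_lock(&bql);
    quit = true;
    pthread_cond_signal(&halt_cond);
    pthread_mutex_unlock(&bql);

    pthread_join(tid, NULL);
    printf("vcpu thread exited cleanly\n");
    return 0;
}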
Signed-off-by: Alex Bennée
Tested-by: Pavel Dovgalyuk
---
 accel/kvm/kvm-all.c   |  4 ----
 cpus.c                | 22 +++++++++++++++-------
 dtc                   |  2 +-
 target/i386/hax-all.c |  2 --
 4 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index f290f48..8d1d2c4 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1857,9 +1857,7 @@ int kvm_cpu_exec(CPUState *cpu)
         return EXCP_HLT;
     }
 
-    qemu_mutex_unlock_iothread();
     cpu_exec_start(cpu);
-
     do {
         MemTxAttrs attrs;
 
@@ -1989,8 +1987,6 @@ int kvm_cpu_exec(CPUState *cpu)
     } while (ret == 0);
 
     cpu_exec_end(cpu);
-    qemu_mutex_lock_iothread();
-
     if (ret < 0) {
         cpu_dump_state(cpu, stderr, fprintf, CPU_DUMP_CODE);
         vm_stop(RUN_STATE_INTERNAL_ERROR);
diff --git a/cpus.c b/cpus.c
index 7b6ce74..ca86d9f 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1150,10 +1150,14 @@ static void qemu_tcg_rr_wait_io_event(CPUState *cpu)
     start_tcg_kick_timer();
 
     qemu_wait_io_event_common(cpu);
+
+    qemu_mutex_unlock_iothread();
 }
 
 static void qemu_wait_io_event(CPUState *cpu)
 {
+    qemu_mutex_lock_iothread();
+
     while (cpu_thread_is_idle(cpu)) {
         qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
     }
@@ -1190,6 +1194,8 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
 
     /* signal CPU creation */
     cpu->created = true;
+    qemu_mutex_unlock_iothread();
+
     qemu_cond_signal(&qemu_cpu_cond);
 
     do {
@@ -1232,10 +1238,10 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
 
     /* signal CPU creation */
     cpu->created = true;
+    qemu_mutex_unlock_iothread();
     qemu_cond_signal(&qemu_cpu_cond);
 
     while (1) {
-        qemu_mutex_unlock_iothread();
         do {
             int sig;
             r = sigwait(&waitset, &sig);
@@ -1246,6 +1252,7 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
         }
         qemu_mutex_lock_iothread();
         qemu_wait_io_event(cpu);
+        qemu_mutex_unlock_iothread();
     }
 
     return NULL;
@@ -1334,11 +1341,9 @@ static int tcg_cpu_exec(CPUState *cpu)
 #ifdef CONFIG_PROFILER
     ti = profile_getclock();
 #endif
-    qemu_mutex_unlock_iothread();
     cpu_exec_start(cpu);
     ret = cpu_exec(cpu);
     cpu_exec_end(cpu);
-    qemu_mutex_lock_iothread();
 #ifdef CONFIG_PROFILER
     tcg_time += profile_getclock() - ti;
 #endif
@@ -1398,6 +1403,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
             qemu_wait_io_event_common(cpu);
         }
     }
+    qemu_mutex_unlock_iothread();
 
     start_tcg_kick_timer();
 
@@ -1407,6 +1413,8 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
     cpu->exit_request = 1;
 
     while (1) {
+        qemu_mutex_lock_iothread();
+
         /* Account partial waits to QEMU_CLOCK_VIRTUAL.  */
         qemu_account_warp_timer();
 
@@ -1415,6 +1423,8 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
          */
         handle_icount_deadline();
 
+        qemu_mutex_unlock_iothread();
+
         if (!cpu) {
             cpu = first_cpu;
         }
@@ -1440,9 +1450,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
                     cpu_handle_guest_debug(cpu);
                     break;
                 } else if (r == EXCP_ATOMIC) {
-                    qemu_mutex_unlock_iothread();
                     cpu_exec_step_atomic(cpu);
-                    qemu_mutex_lock_iothread();
                     break;
                 }
             } else if (cpu->stop) {
@@ -1483,6 +1491,7 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
     current_cpu = cpu;
 
     hax_init_vcpu(cpu);
+    qemu_mutex_unlock_iothread();
     qemu_cond_signal(&qemu_cpu_cond);
 
     while (1) {
@@ -1569,6 +1578,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     cpu->created = true;
     cpu->can_do_io = 1;
     current_cpu = cpu;
+    qemu_mutex_unlock_iothread();
     qemu_cond_signal(&qemu_cpu_cond);
 
     /* process any pending work */
@@ -1593,9 +1603,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
                 g_assert(cpu->halted);
                 break;
             case EXCP_ATOMIC:
-                qemu_mutex_unlock_iothread();
                 cpu_exec_step_atomic(cpu);
-                qemu_mutex_lock_iothread();
             default:
                 /* Ignore everything else? */
                 break;
diff --git a/dtc b/dtc
index e543880..558cd81 160000
--- a/dtc
+++ b/dtc
@@ -1 +1 @@
-Subproject commit e54388015af1fb4bf04d0bca99caba1074d9cc42
+Subproject commit 558cd81bdd432769b59bff01240c44f82cfb1a9d
diff --git a/target/i386/hax-all.c b/target/i386/hax-all.c
index 934ec4a..54b1fc7 100644
--- a/target/i386/hax-all.c
+++ b/target/i386/hax-all.c
@@ -513,11 +513,9 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
 
         hax_vcpu_interrupt(env);
 
-        qemu_mutex_unlock_iothread();
         cpu_exec_start(cpu);
         hax_ret = hax_vcpu_run(vcpu);
         cpu_exec_end(cpu);
-        qemu_mutex_lock_iothread();
 
         /* Simply continue the vcpu_run if system call interrupted */
         if (hax_ret == -EINTR || hax_ret == -EAGAIN) {