From patchwork Tue Aug 2 17:36:39 2016
X-Patchwork-Submitter: Alex Bennée <alex.bennee@linaro.org>
X-Patchwork-Id: 73176
From: Alex Bennée <alex.bennee@linaro.org>
To: mttcg@listserver.greensocs.com, qemu-devel@nongnu.org,
	fred.konrad@greensocs.com, a.rigo@virtualopensystems.com,
	serge.fdrv@gmail.com, cota@braap.org, bobby.prani@gmail.com
Cc: peter.maydell@linaro.org, Peter Crosthwaite,
	claudio.fontana@huawei.com, mark.burton@greensocs.com,
	jan.kiszka@siemens.com, pbonzini@redhat.com,
	Alex Bennée <alex.bennee@linaro.org>, rth@twiddle.net
Date: Tue, 2 Aug 2016 18:36:39 +0100
Message-Id: <1470159399-30437-1-git-send-email-alex.bennee@linaro.org>
In-Reply-To: <1470158864-17651-14-git-send-email-alex.bennee@linaro.org>
References: <1470158864-17651-14-git-send-email-alex.bennee@linaro.org>
Subject: [Qemu-devel] [PATCH v5 13/13] cpu-exec: replace cpu->queued_work with GArray

Under times of high memory stress the additional small mallocs of a
linked list are a source of potential memory fragmentation. As we have
worked hard to avoid mallocs elsewhere when queuing work we might as
well do the same for the list.

We convert the list to an auto-resizing GArray which re-sizes in steps
of powers of 2. In theory the GArray could be mostly lockless but for
the moment we keep the locking scheme as before. Another advantage of
having an array is that we can swap in a new one and process the old
one without bouncing the lock. This is also more cache friendly as we
no longer chase pointers across cache lines as we work through the
saved data.
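For readers unfamiliar with GLib's GArray, here is a minimal standalone
sketch of the two halves of that pattern: appending work items by value
into a lazily-allocated array under a mutex, then detaching the whole
array under the same mutex so it can be processed without the lock. This
is not part of the patch; queue_work, process_queued_work and struct
work_item are illustrative stand-ins for the QEMU functions in the diff
below.

#include <glib.h>

typedef void (*work_func)(void *data);

struct work_item {
    work_func func;
    void *data;
};

static GMutex work_mutex;
static GArray *queued_work;     /* NULL when the queue is empty */

static void queue_work(work_func func, void *data)
{
    struct work_item wi = { .func = func, .data = data };

    g_mutex_lock(&work_mutex);
    if (!queued_work) {
        /* zero_terminated=FALSE, clear=TRUE, reserve 16 slots up front */
        queued_work = g_array_sized_new(FALSE, TRUE,
                                        sizeof(struct work_item), 16);
    }
    g_array_append_val(queued_work, wi);   /* copies wi into the array */
    g_mutex_unlock(&work_mutex);
}

static void process_queued_work(void)
{
    GArray *work_list;
    guint i;

    /* Detach the array; the lock is only held for the pointer swap. */
    g_mutex_lock(&work_mutex);
    work_list = queued_work;
    queued_work = NULL;
    g_mutex_unlock(&work_mutex);

    if (work_list) {
        for (i = 0; i < work_list->len; i++) {
            struct work_item *wi =
                &g_array_index(work_list, struct work_item, i);
            wi->func(wi->data);
        }
        g_array_free(work_list, TRUE);  /* TRUE also frees the elements */
    }
}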
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 cpu-exec-common.c | 107 ++++++++++++++++++++++++++++--------------------------
 cpus.c            |   2 +-
 include/qom/cpu.h |   6 +--
 3 files changed, 59 insertions(+), 56 deletions(-)

-- 
2.7.4

diff --git a/cpu-exec-common.c b/cpu-exec-common.c
index 6d5da15..0bec55a 100644
--- a/cpu-exec-common.c
+++ b/cpu-exec-common.c
@@ -113,17 +113,17 @@ void wait_safe_cpu_work(void)
 static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
 {
     qemu_mutex_lock(&cpu->work_mutex);
-    if (cpu->queued_work_first == NULL) {
-        cpu->queued_work_first = wi;
-    } else {
-        cpu->queued_work_last->next = wi;
+
+    if (!cpu->queued_work) {
+        cpu->queued_work = g_array_sized_new(true, true,
+                                             sizeof(struct qemu_work_item), 16);
     }
-    cpu->queued_work_last = wi;
-    wi->next = NULL;
-    wi->done = false;
+
+    g_array_append_val(cpu->queued_work, *wi);
     if (wi->safe) {
         atomic_inc(&safe_work_pending);
     }
+
     qemu_mutex_unlock(&cpu->work_mutex);
 
     if (!wi->safe) {
@@ -138,6 +138,7 @@ static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
 void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
 {
     struct qemu_work_item wi;
+    bool done = false;
 
     if (qemu_cpu_is_self(cpu)) {
         func(cpu, data);
@@ -146,11 +147,11 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
 
     wi.func = func;
     wi.data = data;
-    wi.free = false;
     wi.safe = false;
+    wi.done = &done;
 
     queue_work_on_cpu(cpu, &wi);
-    while (!atomic_mb_read(&wi.done)) {
+    while (!atomic_mb_read(&done)) {
         CPUState *self_cpu = current_cpu;
 
         qemu_cond_wait(&qemu_work_cond, qemu_get_cpu_work_mutex());
@@ -160,70 +161,74 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
 
 void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
 {
-    struct qemu_work_item *wi;
+    struct qemu_work_item wi;
 
     if (qemu_cpu_is_self(cpu)) {
         func(cpu, data);
         return;
     }
 
-    wi = g_malloc0(sizeof(struct qemu_work_item));
-    wi->func = func;
-    wi->data = data;
-    wi->free = true;
-    wi->safe = false;
+    wi.func = func;
+    wi.data = data;
+    wi.safe = false;
+    wi.done = NULL;
 
-    queue_work_on_cpu(cpu, wi);
+    queue_work_on_cpu(cpu, &wi);
 }
 
 void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
 {
-    struct qemu_work_item *wi;
+    struct qemu_work_item wi;
 
-    wi = g_malloc0(sizeof(struct qemu_work_item));
-    wi->func = func;
-    wi->data = data;
-    wi->free = true;
-    wi->safe = true;
+    wi.func = func;
+    wi.data = data;
+    wi.safe = true;
+    wi.done = NULL;
 
-    queue_work_on_cpu(cpu, wi);
+    queue_work_on_cpu(cpu, &wi);
 }
 
 void process_queued_cpu_work(CPUState *cpu)
 {
     struct qemu_work_item *wi;
-
-    if (cpu->queued_work_first == NULL) {
-        return;
-    }
+    GArray *work_list = NULL;
+    int i;
 
     qemu_mutex_lock(&cpu->work_mutex);
-    while (cpu->queued_work_first != NULL) {
-        wi = cpu->queued_work_first;
-        cpu->queued_work_first = wi->next;
-        if (!cpu->queued_work_first) {
-            cpu->queued_work_last = NULL;
-        }
-        if (wi->safe) {
-            while (tcg_pending_threads) {
-                qemu_cond_wait(&qemu_exclusive_cond,
-                               qemu_get_cpu_work_mutex());
+
+    work_list = cpu->queued_work;
+    cpu->queued_work = NULL;
+
+    qemu_mutex_unlock(&cpu->work_mutex);
+
+    if (work_list) {
+
+        g_assert(work_list->len > 0);
+
+        for (i = 0; i < work_list->len; i++) {
+            wi = &g_array_index(work_list, struct qemu_work_item, i);
+
+            if (wi->safe) {
+                while (tcg_pending_threads) {
+                    qemu_cond_wait(&qemu_exclusive_cond,
+                                   qemu_get_cpu_work_mutex());
+                }
             }
-        }
-        qemu_mutex_unlock(&cpu->work_mutex);
-        wi->func(cpu, wi->data);
-        qemu_mutex_lock(&cpu->work_mutex);
-        if (wi->safe) {
-            if (!atomic_dec_fetch(&safe_work_pending)) {
-                qemu_cond_broadcast(&qemu_safe_work_cond);
+
+            wi->func(cpu, wi->data);
+
+            if (wi->safe) {
+                if (!atomic_dec_fetch(&safe_work_pending)) {
+                    qemu_cond_broadcast(&qemu_safe_work_cond);
+                }
+            }
+
+            if (wi->done) {
+                atomic_mb_set(wi->done, true);
             }
         }
-        if (wi->free) {
-            g_free(wi);
-        } else {
-            atomic_mb_set(&wi->done, true);
-        }
+
+        qemu_cond_broadcast(&qemu_work_cond);
+        g_array_free(work_list, true);
     }
-    qemu_mutex_unlock(&cpu->work_mutex);
-    qemu_cond_broadcast(&qemu_work_cond);
 }
diff --git a/cpus.c b/cpus.c
index b712204..1ea60e4 100644
--- a/cpus.c
+++ b/cpus.c
@@ -88,7 +88,7 @@ bool cpu_is_stopped(CPUState *cpu)
 
 static bool cpu_thread_is_idle(CPUState *cpu)
 {
-    if (cpu->stop || cpu->queued_work_first) {
+    if (cpu->stop || cpu->queued_work) {
         return false;
     }
     if (cpu_is_stopped(cpu)) {
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index dee5ad0..060a473 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -235,11 +235,9 @@ struct kvm_run;
 typedef void (*run_on_cpu_func)(CPUState *cpu, void *data);
 
 struct qemu_work_item {
-    struct qemu_work_item *next;
     run_on_cpu_func func;
     void *data;
-    int done;
-    bool free;
+    bool *done;
     bool safe;
 };
 
@@ -318,7 +316,7 @@ struct CPUState {
     sigjmp_buf jmp_env;
 
     QemuMutex work_mutex;
-    struct qemu_work_item *queued_work_first, *queued_work_last;
+    GArray *queued_work;
 
     CPUAddressSpace *cpu_ases;
     int num_ases;
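A note on the new bool *done field: because work items are now copied
by value into the array, completion can no longer be signalled through
the queued copy itself, so run_on_cpu() points done at a flag on its
own stack and blocks until the processing thread sets it. The following
is a simplified sketch of that handshake, using C11 atomics and a
busy-wait in place of QEMU's atomic_mb_read/atomic_mb_set and
qemu_work_cond; the names are illustrative, not QEMU's API.

#include <stdatomic.h>
#include <stdbool.h>

struct work_item {
    void (*func)(void *data);
    void *data;
    atomic_bool *done;    /* NULL for fire-and-forget (async) items */
};

/* Consumer side: run the item, then signal through the pointer. The
 * queued copy of the struct can be discarded; only *done is shared. */
static void consume(struct work_item *wi)
{
    wi->func(wi->data);
    if (wi->done) {
        atomic_store(wi->done, true);
    }
}

/* Producer side: the flag lives on the waiting thread's stack, which
 * stays valid precisely because this function blocks until it is set. */
static void run_and_wait(void (*func)(void *), void *data,
                         void (*enqueue)(struct work_item))
{
    atomic_bool done = false;
    struct work_item wi = { .func = func, .data = data, .done = &done };

    enqueue(wi);              /* the queue stores a copy of wi */
    while (!atomic_load(&done)) {
        /* QEMU sleeps on qemu_work_cond here; we spin for brevity */
    }
}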