From patchwork Tue Apr 11 10:50:29 2017
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 97235
From: Alex Bennée
To: bobby.prani@gmail.com, rth@twiddle.net, stefanha@redhat.com
Date: Tue, 11 Apr 2017 11:50:29 +0100
Message-Id: <20170411105031.28904-2-alex.bennee@linaro.org>
In-Reply-To: <20170411105031.28904-1-alex.bennee@linaro.org>
References: <20170411105031.28904-1-alex.bennee@linaro.org>
Subject: [Qemu-devel] [PATCH v1 1/3] cputlb: fix and enhance TLB statistics
Cc: Paolo Bonzini, Alex Bennée, qemu-devel@nongnu.org, Peter Crosthwaite

First, this fixes the fact that the statistics were only being updated
when the build was done with DEBUG_TLB. It also now counts the flush
cases based on the degree of synchronisation they require. This is a
more useful discriminator for seeing what is slowing down translation.

Signed-off-by: Alex Bennée
---
 cputlb.c              | 27 ++++++++++++++++++++++++---
 include/exec/cputlb.h |  4 +++-
 translate-all.c       |  4 +++-
 3 files changed, 30 insertions(+), 5 deletions(-)

-- 
2.11.0

diff --git a/cputlb.c b/cputlb.c
index f5d056cc08..224863ed76 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -92,8 +92,13 @@ static void flush_all_helper(CPUState *src, run_on_cpu_func fn,
     }
 }
 
-/* statistics */
-int tlb_flush_count;
+/* Useful statistics - in rough order of expensiveness to the whole
+ * simulation. Synced flushes are the most expensive as all vCPUs need
+ * to be paused for the flush.
+ */
+int tlb_self_flush_count;   /* from vCPU context */
+int tlb_async_flush_count;  /* Deferred flush */
+int tlb_synced_flush_count; /* Synced flush, all vCPUs halted */
 
 /* This is OK because CPU architectures generally permit an
  * implementation to drop entries from the TLB at any time, so
@@ -112,7 +117,6 @@ static void tlb_flush_nocheck(CPUState *cpu)
     }
 
     assert_cpu_is_self(cpu);
-    tlb_debug("(count: %d)\n", tlb_flush_count++);
 
     tb_lock();
 
@@ -131,6 +135,7 @@ static void tlb_flush_nocheck(CPUState *cpu)
 
 static void tlb_flush_global_async_work(CPUState *cpu, run_on_cpu_data data)
 {
+    atomic_inc(&tlb_async_flush_count);
     tlb_flush_nocheck(cpu);
 }
 
@@ -143,6 +148,7 @@ void tlb_flush(CPUState *cpu)
                              RUN_ON_CPU_NULL);
         }
     } else {
+        atomic_inc(&tlb_self_flush_count);
        tlb_flush_nocheck(cpu);
     }
 }
@@ -157,6 +163,7 @@ void tlb_flush_all_cpus(CPUState *src_cpu)
 void tlb_flush_all_cpus_synced(CPUState *src_cpu)
 {
     const run_on_cpu_func fn = tlb_flush_global_async_work;
+    atomic_inc(&tlb_synced_flush_count);
     flush_all_helper(src_cpu, fn, RUN_ON_CPU_NULL);
     async_safe_run_on_cpu(src_cpu, fn, RUN_ON_CPU_NULL);
 }
@@ -168,6 +175,7 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
     int mmu_idx;
 
     assert_cpu_is_self(cpu);
+    atomic_inc(&tlb_async_flush_count);
 
     tb_lock();
 
@@ -206,6 +214,7 @@ void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
                              RUN_ON_CPU_HOST_INT(pending_flushes));
         }
     } else {
+        atomic_inc(&tlb_self_flush_count);
         tlb_flush_by_mmuidx_async_work(cpu,
                                        RUN_ON_CPU_HOST_INT(idxmap));
     }
@@ -219,6 +228,7 @@ void tlb_flush_by_mmuidx_all_cpus(CPUState *src_cpu, uint16_t idxmap)
 
     flush_all_helper(src_cpu, fn, RUN_ON_CPU_HOST_INT(idxmap));
     fn(src_cpu, RUN_ON_CPU_HOST_INT(idxmap));
+    atomic_inc(&tlb_self_flush_count);
 }
 
 void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
@@ -230,6 +240,7 @@ void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
 
     flush_all_helper(src_cpu, fn, RUN_ON_CPU_HOST_INT(idxmap));
     async_safe_run_on_cpu(src_cpu, fn,
                           RUN_ON_CPU_HOST_INT(idxmap));
+    atomic_inc(&tlb_synced_flush_count);
 }
 
@@ -282,6 +293,8 @@ static void tlb_flush_page_async_work(CPUState *cpu, run_on_cpu_data data)
     }
 
     tb_flush_jmp_cache(cpu, addr);
+
+    atomic_inc(&tlb_async_flush_count);
 }
 
 void tlb_flush_page(CPUState *cpu, target_ulong addr)
@@ -293,6 +306,7 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr)
                          RUN_ON_CPU_TARGET_PTR(addr));
     } else {
         tlb_flush_page_async_work(cpu, RUN_ON_CPU_TARGET_PTR(addr));
+        atomic_inc(&tlb_self_flush_count);
     }
 }
 
@@ -329,6 +343,7 @@ static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu,
     }
 
     tb_flush_jmp_cache(cpu, addr);
+    atomic_inc(&tlb_async_flush_count);
 }
 
 static void tlb_check_page_and_flush_by_mmuidx_async_work(CPUState *cpu,
@@ -351,6 +366,7 @@ static void tlb_check_page_and_flush_by_mmuidx_async_work(CPUState *cpu,
                                          RUN_ON_CPU_HOST_INT(mmu_idx_bitmap));
     } else {
         tlb_flush_page_by_mmuidx_async_work(cpu, data);
+        atomic_inc(&tlb_self_flush_count);
     }
 }
 
@@ -370,6 +386,7 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
     } else {
         tlb_check_page_and_flush_by_mmuidx_async_work(
             cpu, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
+        atomic_inc(&tlb_self_flush_count);
     }
 }
 
@@ -387,6 +404,7 @@ void tlb_flush_page_by_mmuidx_all_cpus(CPUState *src_cpu, target_ulong addr,
 
     flush_all_helper(src_cpu, fn, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
     fn(src_cpu, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
+    atomic_inc(&tlb_self_flush_count);
 }
 
 void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
@@ -404,6 +422,7 @@ void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
 
     flush_all_helper(src_cpu, fn, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
     async_safe_run_on_cpu(src_cpu, fn,
                           RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
+    atomic_inc(&tlb_synced_flush_count);
 }
 
 void tlb_flush_page_all_cpus(CPUState *src, target_ulong addr)
@@ -412,6 +431,7 @@ void tlb_flush_page_all_cpus(CPUState *src, target_ulong addr)
 
     flush_all_helper(src, fn, RUN_ON_CPU_TARGET_PTR(addr));
     fn(src, RUN_ON_CPU_TARGET_PTR(addr));
+    atomic_inc(&tlb_self_flush_count);
 }
 
 void tlb_flush_page_all_cpus_synced(CPUState *src,
@@ -421,6 +441,7 @@ void tlb_flush_page_all_cpus_synced(CPUState *src,
 
     flush_all_helper(src, fn, RUN_ON_CPU_TARGET_PTR(addr));
     async_safe_run_on_cpu(src, fn, RUN_ON_CPU_TARGET_PTR(addr));
+    atomic_inc(&tlb_synced_flush_count);
 }
 
 /* update the TLBs so that writes to code in the virtual page 'addr'
diff --git a/include/exec/cputlb.h b/include/exec/cputlb.h
index 3f941783c5..5085384014 100644
--- a/include/exec/cputlb.h
+++ b/include/exec/cputlb.h
@@ -23,7 +23,9 @@
 /* cputlb.c */
 void tlb_protect_code(ram_addr_t ram_addr);
 void tlb_unprotect_code(ram_addr_t ram_addr);
-extern int tlb_flush_count;
+extern int tlb_self_flush_count;
+extern int tlb_async_flush_count;
+extern int tlb_synced_flush_count;
 
 #endif
 #endif
diff --git a/translate-all.c b/translate-all.c
index b3ee876526..0578ae6123 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -1927,7 +1927,9 @@ void dump_exec_info(FILE *f, fprintf_function cpu_fprintf)
             atomic_read(&tcg_ctx.tb_ctx.tb_flush_count));
     cpu_fprintf(f, "TB invalidate count %d\n",
                 tcg_ctx.tb_ctx.tb_phys_invalidate_count);
-    cpu_fprintf(f, "TLB flush count %d\n", tlb_flush_count);
+    cpu_fprintf(f, "TLB flush counts self:%d async:%d synced:%d\n",
+                tlb_self_flush_count, tlb_async_flush_count,
+                tlb_synced_flush_count);
     tcg_dump_info(f, cpu_fprintf);
 
     tb_unlock();