From patchwork Tue Oct 23 07:02:49 2018
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 149420
From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Tue, 23 Oct 2018 08:02:49 +0100
Message-Id: <20181023070253.6407-8-richard.henderson@linaro.org>
In-Reply-To: <20181023070253.6407-1-richard.henderson@linaro.org>
References: <20181023070253.6407-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PATCH 06/10] cputlb: Merge tlb_flush_nocheck into tlb_flush_by_mmuidx_async_work
Cc: cota@braap.org

The difference between the two sets of APIs is now minuscule.

This allows tlb_flush, tlb_flush_all_cpus, and tlb_flush_all_cpus_synced
to be merged with their corresponding by_mmuidx functions as well.  For
accounting, consider mmu_idx_bitmask = ALL_MMUIDX_BITS to be a full flush.

Signed-off-by: Richard Henderson
---
 accel/tcg/cputlb.c | 93 +++++++++++-----------------------------------
 1 file changed, 21 insertions(+), 72 deletions(-)

-- 
2.17.2

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index d3b37ffa85..6b0f93ec01 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -122,75 +122,6 @@ static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx)
     env->tlb_d[mmu_idx].vindex = 0;
 }
 
-/* This is OK because CPU architectures generally permit an
- * implementation to drop entries from the TLB at any time, so
- * flushing more entries than required is only an efficiency issue,
- * not a correctness issue.
- */
-static void tlb_flush_nocheck(CPUState *cpu)
-{
-    CPUArchState *env = cpu->env_ptr;
-    int mmu_idx;
-
-    assert_cpu_is_self(cpu);
-    atomic_set(&env->tlb_flush_count, env->tlb_flush_count + 1);
-    tlb_debug("(count: %zu)\n", tlb_flush_count());
-
-    /*
-     * tlb_table/tlb_v_table updates from any thread must hold tlb_c.lock.
-     * However, updates from the owner thread (as is the case here; see the
-     * above assert_cpu_is_self) do not need atomic_set because all reads
-     * that do not hold the lock are performed by the same owner thread.
-     */
-    qemu_spin_lock(&env->tlb_c.lock);
-    env->tlb_c.pending_flush = 0;
-    for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
-        tlb_flush_one_mmuidx_locked(env, mmu_idx);
-    }
-    qemu_spin_unlock(&env->tlb_c.lock);
-
-    cpu_tb_jmp_cache_clear(cpu);
-}
-
-static void tlb_flush_global_async_work(CPUState *cpu, run_on_cpu_data data)
-{
-    tlb_flush_nocheck(cpu);
-}
-
-void tlb_flush(CPUState *cpu)
-{
-    if (cpu->created && !qemu_cpu_is_self(cpu)) {
-        CPUArchState *env = cpu->env_ptr;
-        uint16_t pending;
-
-        qemu_spin_lock(&env->tlb_c.lock);
-        pending = env->tlb_c.pending_flush;
-        env->tlb_c.pending_flush = ALL_MMUIDX_BITS;
-        qemu_spin_unlock(&env->tlb_c.lock);
-
-        if (pending != ALL_MMUIDX_BITS) {
-            async_run_on_cpu(cpu, tlb_flush_global_async_work,
-                             RUN_ON_CPU_NULL);
-        }
-    } else {
-        tlb_flush_nocheck(cpu);
-    }
-}
-
-void tlb_flush_all_cpus(CPUState *src_cpu)
-{
-    const run_on_cpu_func fn = tlb_flush_global_async_work;
-    flush_all_helper(src_cpu, fn, RUN_ON_CPU_NULL);
-    fn(src_cpu, RUN_ON_CPU_NULL);
-}
-
-void tlb_flush_all_cpus_synced(CPUState *src_cpu)
-{
-    const run_on_cpu_func fn = tlb_flush_global_async_work;
-    flush_all_helper(src_cpu, fn, RUN_ON_CPU_NULL);
-    async_safe_run_on_cpu(src_cpu, fn, RUN_ON_CPU_NULL);
-}
-
 static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
 {
     CPUArchState *env = cpu->env_ptr;
@@ -212,13 +143,17 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
     qemu_spin_unlock(&env->tlb_c.lock);
 
     cpu_tb_jmp_cache_clear(cpu);
+
+    if (mmu_idx_bitmask == ALL_MMUIDX_BITS) {
+        atomic_set(&env->tlb_flush_count, env->tlb_flush_count + 1);
+    }
 }
 
 void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
 {
     tlb_debug("mmu_idx: 0x%" PRIx16 "\n", idxmap);
 
-    if (!qemu_cpu_is_self(cpu)) {
+    if (cpu->created && !qemu_cpu_is_self(cpu)) {
         CPUArchState *env = cpu->env_ptr;
         uint16_t pending, to_clean;
@@ -238,6 +173,11 @@ void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
     }
 }
 
+void tlb_flush(CPUState *cpu)
+{
+    tlb_flush_by_mmuidx(cpu, ALL_MMUIDX_BITS);
+}
+
 void tlb_flush_by_mmuidx_all_cpus(CPUState *src_cpu, uint16_t idxmap)
 {
     const run_on_cpu_func fn = tlb_flush_by_mmuidx_async_work;
@@ -248,8 +188,12 @@ void tlb_flush_by_mmuidx_all_cpus(CPUState *src_cpu, uint16_t idxmap)
     fn(src_cpu, RUN_ON_CPU_HOST_INT(idxmap));
 }
 
-void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
-                                         uint16_t idxmap)
+void tlb_flush_all_cpus(CPUState *src_cpu)
+{
+    tlb_flush_by_mmuidx_all_cpus(src_cpu, ALL_MMUIDX_BITS);
+}
+
+void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu, uint16_t idxmap)
 {
     const run_on_cpu_func fn = tlb_flush_by_mmuidx_async_work;
@@ -259,6 +203,11 @@ void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
     async_safe_run_on_cpu(src_cpu, fn, RUN_ON_CPU_HOST_INT(idxmap));
 }
 
+void tlb_flush_all_cpus_synced(CPUState *src_cpu)
+{
+    tlb_flush_by_mmuidx_all_cpus_synced(src_cpu, ALL_MMUIDX_BITS);
+}
+
 static inline bool tlb_hit_page_anyprot(CPUTLBEntry *tlb_entry,
                                         target_ulong page)
 {