From patchwork Fri Jan 27 10:34:57 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Alex Bennée <alex.bennee@linaro.org>
X-Patchwork-Id: 92638
From: Alex Bennée <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Date: Fri, 27 Jan 2017 10:34:57 +0000
Message-Id: <20170127103505.18606-18-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170127103505.18606-1-alex.bennee@linaro.org>
References: <20170127103505.18606-1-alex.bennee@linaro.org>
Subject: [Qemu-devel] [PATCH v8 17/25] cputlb: add tlb_flush_by_mmuidx async routines
Cc: Paolo Bonzini, Richard Henderson, Alex Bennée,
    "open list:Overall", Peter Crosthwaite

This converts the remaining TLB flush routines to use async work when
detecting a cross-vCPU flush. The only minor complication is having to
serialise the var_list of MMU indexes into a form that can be punted to
an asynchronous job. The pending_tlb_flush field on QOM's CPU structure
also becomes a bitfield rather than a boolean.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
v7
  - un-merged from the atomic cputlb patch in the last series
  - fix long line reported by checkpatch
v8
  - re-base merge/fixes
---
 cputlb.c          | 110 +++++++++++++++++++++++++++++++++++++++++++-----------
 include/qom/cpu.h |   2 +-
 2 files changed, 89 insertions(+), 23 deletions(-)

-- 
2.11.0
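(Not part of the patch, just context for readers new to this code: the
page-flush path below works by packing the page-aligned address and the
MMU-index bitmap into a single target_ulong so the pair fits into
run_on_cpu_data. A rough standalone C sketch of that pack/unpack step,
with made-up page size and MMU mode count, looks like this:)

/* Illustrative only: the EXAMPLE_* constants and values are invented;
 * the patch itself uses TARGET_PAGE_MASK and ALL_MMUIDX_BITS for the
 * same job. */
#include <inttypes.h>
#include <stdio.h>

#define EXAMPLE_PAGE_BITS   12                        /* assume 4K target pages */
#define EXAMPLE_PAGE_MASK   (~(((uint64_t)1 << EXAMPLE_PAGE_BITS) - 1))
#define EXAMPLE_MMUIDX_BITS ((1 << 6) - 1)            /* assume 6 MMU modes */

int main(void)
{
    uint64_t addr = 0x7f00aabbc000ULL;                /* already page aligned */
    uint16_t idxmap = (1 << 0) | (1 << 2);            /* flush MMU indexes 0 and 2 */

    /* pack: the bitmap lives in bits the page offset never uses */
    uint64_t addr_and_mmu_idx = (addr & EXAMPLE_PAGE_MASK) | idxmap;

    /* unpack on the receiving vCPU's side */
    uint64_t page_addr = addr_and_mmu_idx & EXAMPLE_PAGE_MASK;
    uint16_t map = addr_and_mmu_idx & EXAMPLE_MMUIDX_BITS;

    printf("addr=0x%" PRIx64 " idxmap=0x%x\n", page_addr, map);
    return 0;
}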
diff --git a/cputlb.c b/cputlb.c
index 97e5c12de8..c50254be26 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -68,6 +68,11 @@
  * target_ulong even on 32 bit builds
  */
 QEMU_BUILD_BUG_ON(sizeof(target_ulong) > sizeof(run_on_cpu_data));
 
+/* We currently can't handle more than 16 bits in the MMUIDX bitmask.
+ */
+QEMU_BUILD_BUG_ON(NB_MMU_MODES > 16);
+#define ALL_MMUIDX_BITS ((1 << NB_MMU_MODES) - 1)
+
 /* statistics */
 int tlb_flush_count;
 
@@ -102,7 +107,7 @@ static void tlb_flush_nocheck(CPUState *cpu)
 
     tb_unlock();
 
-    atomic_mb_set(&cpu->pending_tlb_flush, false);
+    atomic_mb_set(&cpu->pending_tlb_flush, 0);
 }
 
 static void tlb_flush_global_async_work(CPUState *cpu, run_on_cpu_data data)
@@ -113,7 +118,8 @@ static void tlb_flush_global_async_work(CPUState *cpu, run_on_cpu_data data)
 void tlb_flush(CPUState *cpu)
 {
     if (cpu->created && !qemu_cpu_is_self(cpu)) {
-        if (atomic_cmpxchg(&cpu->pending_tlb_flush, false, true) == true) {
+        if (atomic_mb_read(&cpu->pending_tlb_flush) != ALL_MMUIDX_BITS) {
+            atomic_mb_set(&cpu->pending_tlb_flush, ALL_MMUIDX_BITS);
             async_run_on_cpu(cpu, tlb_flush_global_async_work,
                              RUN_ON_CPU_NULL);
         }
@@ -122,17 +128,18 @@ void tlb_flush(CPUState *cpu)
     }
 }
 
-static inline void v_tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
+static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
 {
     CPUArchState *env = cpu->env_ptr;
-    unsigned long mmu_idx_bitmask = idxmap;
+    unsigned long mmu_idx_bitmask = data.host_int;
    int mmu_idx;
 
     assert_cpu_is_self(cpu);
-    tlb_debug("start\n");
 
     tb_lock();
 
+    tlb_debug("start: mmu_idx:0x%04lx\n", mmu_idx_bitmask);
+
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
 
         if (test_bit(mmu_idx, &mmu_idx_bitmask)) {
@@ -145,12 +152,30 @@ static inline void v_tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
 
     memset(cpu->tb_jmp_cache, 0, sizeof(cpu->tb_jmp_cache));
 
+    tlb_debug("done\n");
+
     tb_unlock();
 }
 
 void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap)
 {
-    v_tlb_flush_by_mmuidx(cpu, idxmap);
+    tlb_debug("mmu_idx: 0x%" PRIx16 "\n", idxmap);
+
+    if (!qemu_cpu_is_self(cpu)) {
+        uint16_t pending_flushes = idxmap;
+        pending_flushes &= ~atomic_mb_read(&cpu->pending_tlb_flush);
+
+        if (pending_flushes) {
+            tlb_debug("reduced mmu_idx: 0x%" PRIx16 "\n", pending_flushes);
+
+            atomic_or(&cpu->pending_tlb_flush, pending_flushes);
+            async_run_on_cpu(cpu, tlb_flush_by_mmuidx_async_work,
+                             RUN_ON_CPU_HOST_INT(pending_flushes));
+        }
+    } else {
+        tlb_flush_by_mmuidx_async_work(cpu,
+                                       RUN_ON_CPU_HOST_INT(idxmap));
+    }
 }
 
 static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr)
@@ -215,27 +240,26 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr)
     }
 }
 
-void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
+/* As we are going to hijack the bottom bits of the page address for a
+ * mmuidx bit mask we need to fail to build if we can't do that
+ */
+QEMU_BUILD_BUG_ON(NB_MMU_MODES > TARGET_PAGE_BITS_MIN);
+
+static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu,
+                                                run_on_cpu_data data)
 {
     CPUArchState *env = cpu->env_ptr;
-    unsigned long mmu_idx_bitmap = idxmap;
-    int i, page, mmu_idx;
+    target_ulong addr_and_mmuidx = (target_ulong) data.target_ptr;
+    target_ulong addr = addr_and_mmuidx & TARGET_PAGE_MASK;
+    unsigned long mmu_idx_bitmap = addr_and_mmuidx & ALL_MMUIDX_BITS;
+    int page = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+    int mmu_idx;
+    int i;
 
     assert_cpu_is_self(cpu);
 
-    tlb_debug("addr "TARGET_FMT_lx"\n", addr);
-
-    /* Check if we need to flush due to large pages.  */
-    if ((addr & env->tlb_flush_mask) == env->tlb_flush_addr) {
-        tlb_debug("forced full flush ("
-                  TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
-                  env->tlb_flush_addr, env->tlb_flush_mask);
-
-        v_tlb_flush_by_mmuidx(cpu, idxmap);
-        return;
-    }
-    addr &= TARGET_PAGE_MASK;
-    page = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+    tlb_debug("page:%d addr:"TARGET_FMT_lx" mmu_idx:0x%lx\n",
+              page, addr, mmu_idx_bitmap);
 
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         if (test_bit(mmu_idx, &mmu_idx_bitmap)) {
@@ -251,6 +275,48 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
     tb_flush_jmp_cache(cpu, addr);
 }
 
+static void tlb_check_page_and_flush_by_mmuidx_async_work(CPUState *cpu,
+                                                          run_on_cpu_data data)
+{
+    CPUArchState *env = cpu->env_ptr;
+    target_ulong addr_and_mmuidx = (target_ulong) data.target_ptr;
+    target_ulong addr = addr_and_mmuidx & TARGET_PAGE_MASK;
+    unsigned long mmu_idx_bitmap = addr_and_mmuidx & ALL_MMUIDX_BITS;
+
+    tlb_debug("addr:"TARGET_FMT_lx" mmu_idx: %04lx\n", addr, mmu_idx_bitmap);
+
+    /* Check if we need to flush due to large pages.  */
+    if ((addr & env->tlb_flush_mask) == env->tlb_flush_addr) {
+        tlb_debug("forced full flush ("
+                  TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
+                  env->tlb_flush_addr, env->tlb_flush_mask);
+
+        tlb_flush_by_mmuidx_async_work(cpu,
+                                       RUN_ON_CPU_HOST_INT(mmu_idx_bitmap));
+    } else {
+        tlb_flush_page_by_mmuidx_async_work(cpu, data);
+    }
+}
+
+void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
+{
+    target_ulong addr_and_mmu_idx;
+
+    tlb_debug("addr: "TARGET_FMT_lx" mmu_idx:%" PRIx16 "\n", addr, idxmap);
+
+    /* This should already be page aligned */
+    addr_and_mmu_idx = addr & TARGET_PAGE_MASK;
+    addr_and_mmu_idx |= idxmap;
+
+    if (!qemu_cpu_is_self(cpu)) {
+        async_run_on_cpu(cpu, tlb_check_page_and_flush_by_mmuidx_async_work,
+                         RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
+    } else {
+        tlb_check_page_and_flush_by_mmuidx_async_work(
+            cpu, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
+    }
+}
+
 void tlb_flush_page_all(target_ulong addr)
 {
     CPUState *cpu;
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 7f1d6a81a0..d996e5a0f4 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -403,7 +403,7 @@ struct CPUState {
      * avoid potential races. The aim of the flag is to avoid
      * unnecessary flushes.
      */
-    bool pending_tlb_flush;
+    uint16_t pending_tlb_flush;
 };
 
 QTAILQ_HEAD(CPUTailQ, CPUState);
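(A closing note, not part of the patch, for readers new to this area: the
cross-vCPU path in tlb_flush_by_mmuidx() above boils down to "only queue
async work for MMU indexes that do not already have a flush pending". A
minimal standalone sketch of that pattern follows; C11 atomics stand in
for QEMU's atomic_* helpers and a hypothetical queue_async_flush() stands
in for async_run_on_cpu():)

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* stand-in for cpu->pending_tlb_flush */
static _Atomic uint16_t pending_tlb_flush;

/* stand-in for async_run_on_cpu(): just report what would be queued */
static void queue_async_flush(uint16_t idxmap)
{
    printf("queue async flush for mmuidx bitmap 0x%04x\n", idxmap);
}

static void request_cross_vcpu_flush(uint16_t idxmap)
{
    /* drop any indexes that already have a flush queued */
    uint16_t pending = idxmap & ~atomic_load(&pending_tlb_flush);

    if (pending) {
        atomic_fetch_or(&pending_tlb_flush, pending);
        queue_async_flush(pending);
    }
}

int main(void)
{
    request_cross_vcpu_flush(0x3);   /* queues indexes 0 and 1 */
    request_cross_vcpu_flush(0x7);   /* only index 2 is new, so queues 0x4 */
    return 0;
}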