From patchwork Fri Aug 24 15:52:46 2018
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org,
    npiggin@gmail.com, catalin.marinas@arm.com,
    linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [RFC PATCH 11/11] arm64: tlb: Avoid synchronous TLBIs when freeing page tables
Date: Fri, 24 Aug 2018 16:52:46 +0100
Message-Id: <1535125966-7666-12-git-send-email-will.deacon@arm.com>
In-Reply-To: <1535125966-7666-1-git-send-email-will.deacon@arm.com>
References: <1535125966-7666-1-git-send-email-will.deacon@arm.com>

By selecting HAVE_RCU_TABLE_INVALIDATE, we can rely on tlb_flush() being
called if we fail to batch table pages for freeing. This in turn allows
us to postpone walk-cache invalidation until tlb_finish_mmu(), which
avoids lots of unnecessary DSBs and means we can shoot down the ASID if
the range is large enough.
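For context, the guarantee relied on here comes from the generic
mmu_gather code in mm/memory.c: with CONFIG_HAVE_RCU_TABLE_INVALIDATE
selected, tlb_remove_table() invalidates the TLB before freeing a table
page directly whenever it fails to allocate a batch page. A rough sketch
of that fallback path, based on the generic code of this era (names such
as tlb_table_invalidate(), tlb_remove_table_one() and MAX_TABLE_BATCH
follow mm/memory.c at the time; treat this as illustrative, not as part
of this patch):

static void tlb_table_invalidate(struct mmu_gather *tlb)
{
#ifdef CONFIG_HAVE_RCU_TABLE_INVALIDATE
	/*
	 * Invalidate page-table caches used by hardware walkers; software
	 * walkers are still covered by the RCU-sched grace period.
	 */
	tlb_flush_mmu_tlbonly(tlb);
#endif
}

void tlb_remove_table(struct mmu_gather *tlb, void *table)
{
	struct mmu_table_batch **batch = &tlb->batch;

	if (*batch == NULL) {
		*batch = (struct mmu_table_batch *)
			__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
		if (*batch == NULL) {
			/*
			 * Batching failed: invalidate the TLB, then free
			 * this table via the one-off (synchronising) path.
			 */
			tlb_table_invalidate(tlb);
			tlb_remove_table_one(table);
			return;
		}
		(*batch)->nr = 0;
	}

	(*batch)->tables[(*batch)->nr++] = table;
	if ((*batch)->nr == MAX_TABLE_BATCH)
		tlb_table_flush(tlb);
}

Either way, tlb_flush() runs before a freed table page can be reused,
which is what makes the per-level __flush_tlb_pgtable() calls removed
below redundant.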
Signed-off-by: Will Deacon
---
 arch/arm64/Kconfig                |  1 +
 arch/arm64/include/asm/tlb.h      |  3 ---
 arch/arm64/include/asm/tlbflush.h | 11 -----------
 3 files changed, 1 insertion(+), 14 deletions(-)

-- 
2.1.4

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 29e75b47becd..89059ee1eccc 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -142,6 +142,7 @@ config ARM64
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RCU_TABLE_FREE
+	select HAVE_RCU_TABLE_INVALIDATE
 	select HAVE_RSEQ
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index baca8dff6884..4f65614b3141 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -54,7 +54,6 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 				  unsigned long addr)
 {
-	__flush_tlb_pgtable(tlb->mm, addr);
 	pgtable_page_dtor(pte);
 	tlb_remove_table(tlb, pte);
 }
@@ -63,7 +62,6 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 				  unsigned long addr)
 {
-	__flush_tlb_pgtable(tlb->mm, addr);
 	tlb_remove_table(tlb, virt_to_page(pmdp));
 }
 #endif
@@ -72,7 +70,6 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
 				  unsigned long addr)
 {
-	__flush_tlb_pgtable(tlb->mm, addr);
 	tlb_remove_table(tlb, virt_to_page(pudp));
 }
 #endif
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 1f77d08e638b..2064ba97845f 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -215,17 +215,6 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
  * Used to invalidate the TLB (walk caches) corresponding to intermediate page
  * table levels (pgd/pud/pmd).
  */
-static inline void __flush_tlb_pgtable(struct mm_struct *mm,
-				       unsigned long uaddr)
-{
-	unsigned long addr = __TLBI_VADDR(uaddr, ASID(mm));
-
-	dsb(ishst);
-	__tlbi(vae1is, addr);
-	__tlbi_user(vae1is, addr);
-	dsb(ish);
-}
-
 static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
 {
 	unsigned long addr = __TLBI_VADDR(kaddr, 0);
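With the patch applied, walk-cache invalidation for freed page-table
pages happens once per gather rather than once per freed table. An
illustrative view of the deferred path, again based on the generic
mmu_gather code of this era rather than an exact trace:

/*
 * tlb_finish_mmu()
 *   -> tlb_flush_mmu()
 *        -> tlb_flush_mmu_tlbonly()  // one arch tlb_flush() covering
 *                                    // the whole gathered range
 *        -> tlb_flush_mmu_free()     // tlb_table_flush(): RCU-free
 *                                    // the batched table pages
 */

So the DSB; TLBI VAE1IS; DSB sequence that __flush_tlb_pgtable() issued
for every freed table is replaced by a single ranged invalidation at
tlb_finish_mmu() time, or an ASID-wide one if the range is large enough.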