From patchwork Thu Aug 30 16:15:45 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 145544
From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org,
	npiggin@gmail.com, catalin.marinas@arm.com,
	linux-arm-kernel@lists.infradead.org,
	Will Deacon <will.deacon@arm.com>
Subject: [PATCH 11/12] arm64: tlb: Avoid synchronous TLBIs when freeing page tables
Date: Thu, 30 Aug 2018 17:15:45 +0100
Message-Id: <1535645747-9823-12-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>

By selecting HAVE_RCU_TABLE_INVALIDATE, we can rely on tlb_flush() being
called if we fail to batch table pages for freeing. This in turn allows
us to postpone walk-cache invalidation until tlb_finish_mmu(), which
avoids lots of unnecessary DSBs and means we can shoot down the ASID if
the range is large enough.
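For reference, the guarantee comes from the generic mmu_gather code
rather than anything arm64-specific. A simplified sketch of the relevant
mm/memory.c helper as it stood around v4.19-rc (abridged, not the
verbatim upstream source):

static void tlb_table_invalidate(struct mmu_gather *tlb)
{
#ifdef CONFIG_HAVE_RCU_TABLE_INVALIDATE
	/*
	 * Invalidate the page-table caches used by hardware walkers.
	 * This ends up in the architecture's tlb_flush(), which is why
	 * arm64 no longer needs a synchronous TLBI in its
	 * __p*_free_tlb() hooks.
	 */
	tlb_flush_mmu_tlbonly(tlb);
#endif
}

tlb_table_invalidate() runs both from tlb_table_flush(), on the
tlb_flush_mmu()/tlb_finish_mmu() path, and from tlb_remove_table() when
batching fails, so the walk caches are always invalidated before a
table page can be freed.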
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/Kconfig                |  1 +
 arch/arm64/include/asm/tlb.h      |  3 ---
 arch/arm64/include/asm/tlbflush.h | 11 -----------
 3 files changed, 1 insertion(+), 14 deletions(-)

-- 
2.1.4

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 29e75b47becd..89059ee1eccc 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -142,6 +142,7 @@ config ARM64
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RCU_TABLE_FREE
+	select HAVE_RCU_TABLE_INVALIDATE
 	select HAVE_RSEQ
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index b078fdec10d5..106fdc951b6e 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -54,7 +54,6 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 				  unsigned long addr)
 {
-	__flush_tlb_pgtable(tlb->mm, addr);
 	pgtable_page_dtor(pte);
 	tlb_remove_table(tlb, pte);
 }
@@ -63,7 +62,6 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 				  unsigned long addr)
 {
-	__flush_tlb_pgtable(tlb->mm, addr);
 	tlb_remove_table(tlb, virt_to_page(pmdp));
 }
 #endif
@@ -72,7 +70,6 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
 				  unsigned long addr)
 {
-	__flush_tlb_pgtable(tlb->mm, addr);
 	tlb_remove_table(tlb, virt_to_page(pudp));
 }
 #endif
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 37ccdb246b20..c98ed8871030 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -215,17 +215,6 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
  * Used to invalidate the TLB (walk caches) corresponding to intermediate page
  * table levels (pgd/pud/pmd).
  */
-static inline void __flush_tlb_pgtable(struct mm_struct *mm,
-				       unsigned long uaddr)
-{
-	unsigned long addr = __TLBI_VADDR(uaddr, ASID(mm));
-
-	dsb(ishst);
-	__tlbi(vae1is, addr);
-	__tlbi_user(vae1is, addr);
-	dsb(ish);
-}
-
 static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
 {
 	unsigned long addr = __TLBI_VADDR(kaddr, 0);
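The "fail to batch" case mentioned in the commit message is the
GFP_NOWAIT allocation failure in the generic tlb_remove_table(). A
trimmed sketch of that function from the same era (illustrative rather
than the exact upstream source):

void tlb_remove_table(struct mmu_gather *tlb, void *table)
{
	struct mmu_table_batch **batch = &tlb->batch;

	if (*batch == NULL) {
		*batch = (struct mmu_table_batch *)
			__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
		if (*batch == NULL) {
			/*
			 * No batch page available: invalidate the walk
			 * caches now, then free this one table via the
			 * IPI/RCU fallback on its own.
			 */
			tlb_table_invalidate(tlb);
			tlb_remove_table_one(table);
			return;
		}
		(*batch)->nr = 0;
	}
	(*batch)->pages[(*batch)->nr++] = table;
	if ((*batch)->nr == MAX_TABLE_BATCH)
		tlb_table_flush(tlb);
}

Since both this fallback and the normal tlb_table_flush() path invoke
tlb_table_invalidate() before the pages are freed, dropping the
per-level __flush_tlb_pgtable() calls does not open a window in which a
hardware walker could observe a freed table.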