From patchwork Fri Aug 24 15:52:40 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 145074
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org,
	npiggin@gmail.com, catalin.marinas@arm.com,
	linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [RFC PATCH 05/11] arm64: tlbflush: Allow stride to be specified for __flush_tlb_range()
Date: Fri, 24 Aug 2018 16:52:40 +0100
Message-Id: <1535125966-7666-6-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1535125966-7666-1-git-send-email-will.deacon@arm.com>
References: <1535125966-7666-1-git-send-email-will.deacon@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

When we are unmapping intermediate page-table entries or huge pages, we
don't need to issue a TLBI instruction for every PAGE_SIZE chunk in the
VA range being unmapped.

Allow the invalidation stride to be passed to __flush_tlb_range(), and
adjust our "just nuke the ASID" heuristic to take this into account.

Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/tlb.h      |  2 +-
 arch/arm64/include/asm/tlbflush.h | 11 +++++++----
 2 files changed, 8 insertions(+), 5 deletions(-)

-- 
2.1.4

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index a3233167be60..1e1f68ce28f4 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -53,7 +53,7 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 	 * the __(pte|pmd|pud)_free_tlb() functions, so last level
 	 * TLBI is sufficient here.
 	 */
-	__flush_tlb_range(&vma, tlb->start, tlb->end, true);
+	__flush_tlb_range(&vma, tlb->start, tlb->end, PAGE_SIZE, true);
 }
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index ddbf1718669d..1f77d08e638b 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -153,12 +153,15 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
-				     bool last_level)
+				     unsigned long stride, bool last_level)
 {
 	unsigned long asid = ASID(vma->vm_mm);
 	unsigned long addr;
 
-	if ((end - start) > MAX_TLB_RANGE) {
+	/* Convert the stride into units of 4k */
+	stride >>= 12;
+
+	if ((end - start) > (MAX_TLB_RANGE * stride)) {
 		flush_tlb_mm(vma->vm_mm);
 		return;
 	}
@@ -167,7 +170,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	end = __TLBI_VADDR(end, asid);
 
 	dsb(ishst);
-	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) {
+	for (addr = start; addr < end; addr += stride) {
 		if (last_level) {
 			__tlbi(vale1is, addr);
 			__tlbi_user(vale1is, addr);
@@ -186,7 +189,7 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
 	 * We cannot use leaf-only invalidation here, since we may be invalidating
 	 * table entries as part of collapsing hugepages or moving page tables.
 	 */
-	__flush_tlb_range(vma, start, end, false);
+	__flush_tlb_range(vma, start, end, PAGE_SIZE, false);
 }
 
 static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
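
As an aside for readers less familiar with the heuristic being changed: the
stand-alone user-space sketch below (not part of the patch) models the new
stride handling in __flush_tlb_range(). It assumes MAX_TLB_RANGE is
(1024UL << PAGE_SHIFT) as in contemporaneous kernels, and it reduces
__TLBI_VADDR() to "virtual address in units of 4K" with the ASID ignored;
both are simplifications for illustration only, and the example addresses
are arbitrary.

	/* Minimal user-space model of the stride logic; not kernel code. */
	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)
	#define PMD_SIZE	(1UL << 21)		/* 2MB with 4K pages */
	#define MAX_TLB_RANGE	(1024UL << PAGE_SHIFT)	/* assumed value */

	/* Number of per-VA TLBIs issued, or 0 if the whole ASID is flushed. */
	static unsigned long tlbi_count(unsigned long start, unsigned long end,
					unsigned long stride)
	{
		unsigned long addr, n = 0;

		/* Convert the stride into units of 4k, as the patch does. */
		stride >>= 12;

		/* The "just nuke the ASID" threshold now scales with the stride. */
		if ((end - start) > (MAX_TLB_RANGE * stride))
			return 0;

		/* Stand-in for __TLBI_VADDR(): drop the low 12 bits of the VA. */
		start >>= 12;
		end >>= 12;

		for (addr = start; addr < end; addr += stride)
			n++;	/* one TLBI (vale1is/vae1is) per iteration */

		return n;
	}

	int main(void)
	{
		unsigned long start = 0x400000UL;
		unsigned long end   = start + (64UL << 21);	/* 64 x 2MB = 128MB */
		unsigned long n;

		n = tlbi_count(start, end, PAGE_SIZE);
		printf("stride = PAGE_SIZE: %s (%lu TLBIs)\n",
		       n ? "per-VA invalidation" : "whole-ASID flush", n);

		n = tlbi_count(start, end, PMD_SIZE);
		printf("stride = PMD_SIZE:  %s (%lu TLBIs)\n",
		       n ? "per-VA invalidation" : "whole-ASID flush", n);

		return 0;
	}

Under the assumptions above, a 128MB range always exceeded the old fixed
threshold and fell back to flushing the whole ASID, whereas unmapping the
same range of 2MB hugepages with a PMD_SIZE stride can now be handled with
64 targeted TLBIs.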