From patchwork Thu Aug 30 16:15:35 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 145552

From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org, npiggin@gmail.com, catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [PATCH 01/12] arm64: tlb: Use last-level invalidation in flush_tlb_kernel_range()
Date: Thu, 30 Aug 2018 17:15:35 +0100
Message-Id: <1535645747-9823-2-git-send-email-will.deacon@arm.com>
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
flush_tlb_kernel_range() is only ever used to invalidate last-level
entries, so we can restrict the scope of the TLB invalidation
instruction.

Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/tlbflush.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index a4a1901140ee..7e2a35424ca4 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -199,7 +199,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
 	dsb(ishst);
 	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
-		__tlbi(vaae1is, addr);
+		__tlbi(vaale1is, addr);
 	dsb(ish);
 	isb();
 }
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org, npiggin@gmail.com, catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [PATCH 02/12] arm64: tlb: Add DSB ISHST prior to TLBI in __flush_tlb_[kernel_]pgtable()
Date: Thu, 30 Aug 2018 17:15:36 +0100
Message-Id: <1535645747-9823-3-git-send-email-will.deacon@arm.com>
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>

__flush_tlb_[kernel_]pgtable() rely on set_pXd() having a DSB after
writing the new table entry and therefore avoid the barrier prior to the
TLBI instruction. In preparation for delaying our walk-cache
invalidation on the unmap() path, move the DSB into the TLB invalidation
routines.
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/tlbflush.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 7e2a35424ca4..e257f8655b84 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -213,6 +213,7 @@ static inline void __flush_tlb_pgtable(struct mm_struct *mm,
 {
 	unsigned long addr = __TLBI_VADDR(uaddr, ASID(mm));
 
+	dsb(ishst);
 	__tlbi(vae1is, addr);
 	__tlbi_user(vae1is, addr);
 	dsb(ish);
@@ -222,6 +223,7 @@ static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
 {
 	unsigned long addr = __TLBI_VADDR(kaddr, 0);
 
+	dsb(ishst);
 	__tlbi(vaae1is, addr);
 	dsb(ish);
 }
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org, npiggin@gmail.com, catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [PATCH 03/12] arm64: pgtable: Implement p[mu]d_valid() and check in set_p[mu]d()
Date: Thu, 30 Aug 2018 17:15:37 +0100
Message-Id: <1535645747-9823-4-git-send-email-will.deacon@arm.com>
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>

Now that our walk-cache invalidation routines imply a DSB before the
invalidation, we no longer need one when we are clearing an entry during
unmap.

Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/pgtable.h | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 1bdeca8918a6..2ab2031b778c 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -360,6 +360,7 @@ static inline int pmd_protnone(pmd_t pmd)
 #define pmd_present(pmd)	pte_present(pmd_pte(pmd))
 #define pmd_dirty(pmd)		pte_dirty(pmd_pte(pmd))
 #define pmd_young(pmd)		pte_young(pmd_pte(pmd))
+#define pmd_valid(pmd)		pte_valid(pmd_pte(pmd))
 #define pmd_wrprotect(pmd)	pte_pmd(pte_wrprotect(pmd_pte(pmd)))
 #define pmd_mkold(pmd)		pte_pmd(pte_mkold(pmd_pte(pmd)))
 #define pmd_mkwrite(pmd)	pte_pmd(pte_mkwrite(pmd_pte(pmd)))
@@ -431,7 +432,9 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 {
 	WRITE_ONCE(*pmdp, pmd);
-	dsb(ishst);
+
+	if (pmd_valid(pmd))
+		dsb(ishst);
 }
 
 static inline void pmd_clear(pmd_t *pmdp)
@@ -477,11 +480,14 @@ static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
 #define pud_none(pud)		(!pud_val(pud))
 #define pud_bad(pud)		(!(pud_val(pud) & PUD_TABLE_BIT))
 #define pud_present(pud)	pte_present(pud_pte(pud))
+#define pud_valid(pud)		pte_valid(pud_pte(pud))
 
 static inline void set_pud(pud_t *pudp, pud_t pud)
 {
 	WRITE_ONCE(*pudp, pud);
-	dsb(ishst);
+
+	if (pud_valid(pud))
+		dsb(ishst);
 }
 
 static inline void pud_clear(pud_t *pudp)
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org, npiggin@gmail.com, catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [PATCH 04/12] arm64: tlb: Justify non-leaf invalidation in flush_tlb_range()
Date: Thu, 30 Aug 2018 17:15:38 +0100
Message-Id: <1535645747-9823-5-git-send-email-will.deacon@arm.com>
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>

Add a comment to explain why we can't get away with last-level
invalidation in flush_tlb_range().

Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/tlbflush.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index e257f8655b84..ddbf1718669d 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -182,6 +182,10 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 static inline void flush_tlb_range(struct vm_area_struct *vma,
 				   unsigned long start, unsigned long end)
 {
+	/*
+	 * We cannot use leaf-only invalidation here, since we may be invalidating
+	 * table entries as part of collapsing hugepages or moving page tables.
+	 */
 	__flush_tlb_range(vma, start, end, false);
 }
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org, npiggin@gmail.com, catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [PATCH 05/12] arm64: tlbflush: Allow stride to be specified for __flush_tlb_range()
Date: Thu, 30 Aug 2018 17:15:39 +0100
Message-Id: <1535645747-9823-6-git-send-email-will.deacon@arm.com>
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
When we are unmapping intermediate page-table entries or huge pages, we
don't need to issue a TLBI instruction for every PAGE_SIZE chunk in the
VA range being unmapped.

Allow the invalidation stride to be passed to __flush_tlb_range(), and
adjust our "just nuke the ASID" heuristic to take this into account.

Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/tlb.h      |  2 +-
 arch/arm64/include/asm/tlbflush.h | 15 +++++++++------
 2 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index a3233167be60..1e1f68ce28f4 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -53,7 +53,7 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 	 * the __(pte|pmd|pud)_free_tlb() functions, so last level
 	 * TLBI is sufficient here.
 	 */
-	__flush_tlb_range(&vma, tlb->start, tlb->end, true);
+	__flush_tlb_range(&vma, tlb->start, tlb->end, PAGE_SIZE, true);
 }
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index ddbf1718669d..37ccdb246b20 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -149,25 +149,28 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
  * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
  * necessarily a performance improvement.
  */
-#define MAX_TLB_RANGE	(1024UL << PAGE_SHIFT)
+#define MAX_TLBI_OPS	1024UL
 
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
-				     bool last_level)
+				     unsigned long stride, bool last_level)
 {
 	unsigned long asid = ASID(vma->vm_mm);
 	unsigned long addr;
 
-	if ((end - start) > MAX_TLB_RANGE) {
+	if ((end - start) > (MAX_TLBI_OPS * stride)) {
 		flush_tlb_mm(vma->vm_mm);
 		return;
 	}
 
+	/* Convert the stride into units of 4k */
+	stride >>= 12;
+
 	start = __TLBI_VADDR(start, asid);
 	end = __TLBI_VADDR(end, asid);
 
 	dsb(ishst);
-	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) {
+	for (addr = start; addr < end; addr += stride) {
 		if (last_level) {
 			__tlbi(vale1is, addr);
 			__tlbi_user(vale1is, addr);
@@ -186,14 +189,14 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
 	 * We cannot use leaf-only invalidation here, since we may be invalidating
 	 * table entries as part of collapsing hugepages or moving page tables.
 	 */
-	__flush_tlb_range(vma, start, end, false);
+	__flush_tlb_range(vma, start, end, PAGE_SIZE, false);
 }
 
 static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
 	unsigned long addr;
 
-	if ((end - start) > MAX_TLB_RANGE) {
+	if ((end - start) > (MAX_TLBI_OPS * PAGE_SIZE)) {
 		flush_tlb_all();
 		return;
 	}
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org, npiggin@gmail.com, catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [PATCH 06/12] arm64: tlb: Remove redundant !CONFIG_HAVE_RCU_TABLE_FREE code
Date: Thu, 30 Aug 2018 17:15:40 +0100
Message-Id: <1535645747-9823-7-git-send-email-will.deacon@arm.com>
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
If there's one thing the RCU-based table freeing doesn't need, it's more
ifdeffery. Remove the redundant !CONFIG_HAVE_RCU_TABLE_FREE code, since
this option is unconditionally selected in our Kconfig.

Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/tlb.h | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 1e1f68ce28f4..bd00017d529a 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -22,16 +22,10 @@
 #include
 #include
 
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
-
-#define tlb_remove_entry(tlb, entry)	tlb_remove_table(tlb, entry)
 static inline void __tlb_remove_table(void *_table)
 {
 	free_page_and_swap_cache((struct page *)_table);
 }
-#else
-#define tlb_remove_entry(tlb, entry)	tlb_remove_page(tlb, entry)
-#endif /* CONFIG_HAVE_RCU_TABLE_FREE */
 
 static void tlb_flush(struct mmu_gather *tlb);
 
@@ -61,7 +55,7 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 {
 	__flush_tlb_pgtable(tlb->mm, addr);
 	pgtable_page_dtor(pte);
-	tlb_remove_entry(tlb, pte);
+	tlb_remove_table(tlb, pte);
 }
 
 #if CONFIG_PGTABLE_LEVELS > 2
@@ -69,7 +63,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 				  unsigned long addr)
 {
 	__flush_tlb_pgtable(tlb->mm, addr);
-	tlb_remove_entry(tlb, virt_to_page(pmdp));
+	tlb_remove_table(tlb, virt_to_page(pmdp));
 }
 #endif
 
@@ -78,7 +72,7 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
 				  unsigned long addr)
 {
 	__flush_tlb_pgtable(tlb->mm, addr);
-	tlb_remove_entry(tlb, virt_to_page(pudp));
+	tlb_remove_table(tlb, virt_to_page(pudp));
 }
 #endif
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org,
    npiggin@gmail.com, catalin.marinas@arm.com,
    linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [PATCH 07/12] asm-generic/tlb: Guard with #ifdef CONFIG_MMU
Date: Thu, 30 Aug 2018 17:15:41 +0100
Message-Id: <1535645747-9823-8-git-send-email-will.deacon@arm.com>
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
The inner workings of the mmu_gather-based TLB invalidation mechanism
are not relevant to nommu configurations, so guard them with an #ifdef.
This allows us to implement future functions using static inlines
without breaking the build.

Signed-off-by: Will Deacon
---
 include/asm-generic/tlb.h | 4 ++++
 1 file changed, 4 insertions(+)

-- 
2.1.4

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index b3353e21f3b3..a25e236f7a7f 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -20,6 +20,8 @@
 #include
 #include

+#ifdef CONFIG_MMU
+
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 /*
  * Semi RCU freeing of the page directories.
@@ -310,6 +312,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #endif
 #endif

+#endif /* CONFIG_MMU */
+
 #define tlb_migrate_finish(mm) do {} while (0)

 #endif /* _ASM_GENERIC__TLB_H */

From patchwork Thu Aug 30 16:15:42 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 145546
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org,
    npiggin@gmail.com, catalin.marinas@arm.com,
    linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [PATCH 08/12] asm-generic/tlb: Track freeing of page-table directories in struct mmu_gather
Date: Thu, 30 Aug 2018 17:15:42 +0100
Message-Id: <1535645747-9823-9-git-send-email-will.deacon@arm.com>
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>

From: Peter Zijlstra

Some architectures require different TLB invalidation instructions
depending on whether it is only the last-level of page table being
changed, or whether there are also changes to the intermediate
(directory) entries higher up the tree.

Add a new bit to the flags bitfield in struct mmu_gather so that the
architecture code can operate accordingly if it's the intermediate
levels being invalidated.
Signed-off-by: Peter Zijlstra
Signed-off-by: Will Deacon
---
 include/asm-generic/tlb.h | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

-- 
2.1.4

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index a25e236f7a7f..2b444ad94566 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -99,12 +99,22 @@ struct mmu_gather {
 #endif
 	unsigned long		start;
 	unsigned long		end;
-	/* we are in the middle of an operation to clear
-	 * a full mm and can make some optimizations */
-	unsigned int		fullmm : 1,
-	/* we have performed an operation which
-	 * requires a complete flush of the tlb */
-				need_flush_all : 1;
+	/*
+	 * we are in the middle of an operation to clear
+	 * a full mm and can make some optimizations
+	 */
+	unsigned int		fullmm : 1;
+
+	/*
+	 * we have performed an operation which
+	 * requires a complete flush of the tlb
+	 */
+	unsigned int		need_flush_all : 1;
+
+	/*
+	 * we have removed page directories
+	 */
+	unsigned int		freed_tables : 1;

 	struct mmu_gather_batch *active;
 	struct mmu_gather_batch	local;
@@ -139,6 +149,7 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 		tlb->start = TASK_SIZE;
 		tlb->end = 0;
 	}
+	tlb->freed_tables = 0;
 }

 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
@@ -280,6 +291,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pte_free_tlb(tlb, ptep, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->freed_tables = 1;				\
 		__pte_free_tlb(tlb, ptep, address);		\
 	} while (0)
 #endif
@@ -287,7 +299,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #ifndef pmd_free_tlb
 #define pmd_free_tlb(tlb, pmdp, address)			\
 	do {							\
-		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->freed_tables = 1;				\
 		__pmd_free_tlb(tlb, pmdp, address);		\
 	} while (0)
 #endif
@@ -297,6 +310,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pud_free_tlb(tlb, pudp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->freed_tables = 1;				\
 		__pud_free_tlb(tlb, pudp, address);		\
 	} while (0)
 #endif
@@ -306,7 +320,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #ifndef p4d_free_tlb
 #define p4d_free_tlb(tlb, pudp, address)			\
 	do {							\
-		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->freed_tables = 1;				\
 		__p4d_free_tlb(tlb, pudp, address);		\
 	} while (0)
 #endif

From patchwork Thu Aug 30 16:15:43 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 145541
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org,
    npiggin@gmail.com, catalin.marinas@arm.com,
    linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [PATCH 09/12] asm-generic/tlb: Track which levels of the page tables have been cleared
Date: Thu, 30 Aug 2018 17:15:43 +0100
Message-Id: <1535645747-9823-10-git-send-email-will.deacon@arm.com>
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>

It is common for architectures with hugepage support to require only a
single TLB invalidation operation per hugepage during unmap(), rather
than iterating through the mapping at a PAGE_SIZE increment. Currently,
however, the level in the page table where the unmap() operation occurs
is not stored in the mmu_gather structure, therefore forcing
architectures to issue additional TLB invalidation operations or to give
up and over-invalidate by e.g. invalidating the entire TLB.

Ideally, we could add an interval rbtree to the mmu_gather structure,
which would allow us to associate the correct mapping granule with the
various sub-mappings within the range being invalidated. However, this
is costly in terms of book-keeping and memory management, so instead we
approximate by keeping track of the page table levels that are cleared
and provide a means to query the smallest granule required for
invalidation.
Signed-off-by: Will Deacon
---
 include/asm-generic/tlb.h | 58 ++++++++++++++++++++++++++++++++++++++++-------
 mm/memory.c               |  4 +++-
 2 files changed, 53 insertions(+), 9 deletions(-)

-- 
2.1.4

Acked-by: Nicholas Piggin

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 2b444ad94566..9791e98122a0 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -116,6 +116,14 @@ struct mmu_gather {
 	 */
 	unsigned int		freed_tables : 1;

+	/*
+	 * at which levels have we cleared entries?
+	 */
+	unsigned int		cleared_ptes : 1;
+	unsigned int		cleared_pmds : 1;
+	unsigned int		cleared_puds : 1;
+	unsigned int		cleared_p4ds : 1;
+
 	struct mmu_gather_batch *active;
 	struct mmu_gather_batch	local;
 	struct page		*__pages[MMU_GATHER_BUNDLE];
@@ -150,6 +158,10 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 		tlb->end = 0;
 	}
 	tlb->freed_tables = 0;
+	tlb->cleared_ptes = 0;
+	tlb->cleared_pmds = 0;
+	tlb->cleared_puds = 0;
+	tlb->cleared_p4ds = 0;
 }

 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
@@ -199,6 +211,25 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 }
 #endif

+static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
+{
+	if (tlb->cleared_ptes)
+		return PAGE_SHIFT;
+	if (tlb->cleared_pmds)
+		return PMD_SHIFT;
+	if (tlb->cleared_puds)
+		return PUD_SHIFT;
+	if (tlb->cleared_p4ds)
+		return P4D_SHIFT;
+
+	return PAGE_SHIFT;
+}
+
+static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
+{
+	return 1UL << tlb_get_unmap_shift(tlb);
+}
+
 /*
  * In the case of tlb vma handling, we can optimise these away in the
  * case where we're doing a full MM flush. When we're doing a munmap,
@@ -232,13 +263,19 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define tlb_remove_tlb_entry(tlb, ptep, address)		\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->cleared_ptes = 1;				\
 		__tlb_remove_tlb_entry(tlb, ptep, address);	\
 	} while (0)

-#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	     \
-	do {							     \
-		__tlb_adjust_range(tlb, address, huge_page_size(h)); \
-		__tlb_remove_tlb_entry(tlb, ptep, address);	     \
+#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
+	do {							\
+		unsigned long _sz = huge_page_size(h);		\
+		__tlb_adjust_range(tlb, address, _sz);		\
+		if (_sz == PMD_SIZE)				\
+			tlb->cleared_pmds = 1;			\
+		else if (_sz == PUD_SIZE)			\
+			tlb->cleared_puds = 1;			\
+		__tlb_remove_tlb_entry(tlb, ptep, address);	\
 	} while (0)

 /**
@@ -252,6 +289,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define tlb_remove_pmd_tlb_entry(tlb, pmdp, address)			\
 	do {								\
 		__tlb_adjust_range(tlb, address, HPAGE_PMD_SIZE);	\
+		tlb->cleared_pmds = 1;					\
 		__tlb_remove_pmd_tlb_entry(tlb, pmdp, address);		\
 	} while (0)

@@ -266,6 +304,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define tlb_remove_pud_tlb_entry(tlb, pudp, address)			\
 	do {								\
 		__tlb_adjust_range(tlb, address, HPAGE_PUD_SIZE);	\
+		tlb->cleared_puds = 1;					\
 		__tlb_remove_pud_tlb_entry(tlb, pudp, address);		\
 	} while (0)

@@ -291,7 +330,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pte_free_tlb(tlb, ptep, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;				\
+		tlb->freed_tables = 1;				\
+		tlb->cleared_pmds = 1;				\
 		__pte_free_tlb(tlb, ptep, address);		\
 	} while (0)
 #endif
@@ -300,7 +340,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pmd_free_tlb(tlb, pmdp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;				\
+		tlb->freed_tables = 1;				\
+		tlb->cleared_puds = 1;				\
 		__pmd_free_tlb(tlb, pmdp, address);		\
 	} while (0)
 #endif
@@ -310,7 +351,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pud_free_tlb(tlb, pudp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;				\
+		tlb->freed_tables = 1;				\
+		tlb->cleared_p4ds = 1;				\
 		__pud_free_tlb(tlb, pudp, address);		\
 	} while (0)
 #endif
@@ -321,7 +363,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define p4d_free_tlb(tlb, pudp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;				\
+		tlb->freed_tables = 1;				\
 		__p4d_free_tlb(tlb, pudp, address);		\
 	} while (0)
 #endif

diff --git a/mm/memory.c b/mm/memory.c
index c467102a5cbc..9135f48e8d84 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -267,8 +267,10 @@ void arch_tlb_finish_mmu(struct mmu_gather *tlb,
 {
 	struct mmu_gather_batch *batch, *next;

-	if (force)
+	if (force) {
+		__tlb_reset_range(tlb);
 		__tlb_adjust_range(tlb, start, end - start);
+	}

 	tlb_flush_mmu(tlb);

From patchwork Thu Aug 30 16:15:44 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 145542
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org,
    npiggin@gmail.com, catalin.marinas@arm.com,
    linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [PATCH 10/12] arm64: tlb: Adjust stride and type of TLBI according to mmu_gather
Date: Thu, 30 Aug 2018 17:15:44 +0100
Message-Id: <1535645747-9823-11-git-send-email-will.deacon@arm.com>
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>

Now that the core mmu_gather code keeps track of both the levels of page
table cleared and also whether or not these entries correspond to
intermediate entries, we can use this in our tlb_flush() callback to
reduce the number of invalidations we issue as well as their scope.
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/tlb.h | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

-- 
2.1.4

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index bd00017d529a..b078fdec10d5 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -34,20 +34,21 @@ static void tlb_flush(struct mmu_gather *tlb);
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
 	struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0);
+	bool last_level = !tlb->freed_tables;
+	unsigned long stride = tlb_get_unmap_size(tlb);

 	/*
-	 * The ASID allocator will either invalidate the ASID or mark
-	 * it as used.
+	 * If we're tearing down the address space then we only care about
+	 * invalidating the walk-cache, since the ASID allocator won't
+	 * reallocate our ASID without invalidating the entire TLB.
 	 */
-	if (tlb->fullmm)
+	if (tlb->fullmm) {
+		if (!last_level)
+			flush_tlb_mm(tlb->mm);
 		return;
+	}

-	/*
-	 * The intermediate page table levels are already handled by
-	 * the __(pte|pmd|pud)_free_tlb() functions, so last level
-	 * TLBI is sufficient here.
-	 */
-	__flush_tlb_range(&vma, tlb->start, tlb->end, PAGE_SIZE, true);
+	__flush_tlb_range(&vma, tlb->start, tlb->end, stride, last_level);
 }

 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,

From patchwork Thu Aug 30 16:15:45 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 145544
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org,
    npiggin@gmail.com, catalin.marinas@arm.com,
    linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [PATCH 11/12] arm64: tlb: Avoid synchronous TLBIs when freeing page tables
Date: Thu, 30 Aug 2018 17:15:45 +0100
Message-Id: <1535645747-9823-12-git-send-email-will.deacon@arm.com>
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>

By selecting HAVE_RCU_TABLE_INVALIDATE, we can rely on tlb_flush() being
called if we fail to batch table pages for freeing. This in turn allows
us to postpone walk-cache invalidation until tlb_finish_mmu(), which
avoids lots of unnecessary DSBs and means we can shoot down the ASID if
the range is large enough.

Signed-off-by: Will Deacon
---
 arch/arm64/Kconfig                |  1 +
 arch/arm64/include/asm/tlb.h      |  3 ---
 arch/arm64/include/asm/tlbflush.h | 11 -----------
 3 files changed, 1 insertion(+), 14 deletions(-)

-- 
2.1.4

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 29e75b47becd..89059ee1eccc 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -142,6 +142,7 @@ config ARM64
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RCU_TABLE_FREE
+	select HAVE_RCU_TABLE_INVALIDATE
 	select HAVE_RSEQ
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index b078fdec10d5..106fdc951b6e 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -54,7 +54,6 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 				  unsigned long addr)
 {
-	__flush_tlb_pgtable(tlb->mm, addr);
 	pgtable_page_dtor(pte);
 	tlb_remove_table(tlb, pte);
 }
@@ -63,7 +62,6 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 				  unsigned long addr)
 {
-	__flush_tlb_pgtable(tlb->mm, addr);
 	tlb_remove_table(tlb, virt_to_page(pmdp));
 }
 #endif
@@ -72,7 +70,6 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
 				  unsigned long addr)
 {
-	__flush_tlb_pgtable(tlb->mm, addr);
 	tlb_remove_table(tlb, virt_to_page(pudp));
 }
 #endif

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 37ccdb246b20..c98ed8871030 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -215,17 +215,6 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
  * Used to invalidate the TLB (walk caches) corresponding to intermediate page
  * table levels (pgd/pud/pmd).
  */
-static inline void __flush_tlb_pgtable(struct mm_struct *mm,
-				       unsigned long uaddr)
-{
-	unsigned long addr = __TLBI_VADDR(uaddr, ASID(mm));
-
-	dsb(ishst);
-	__tlbi(vae1is, addr);
-	__tlbi_user(vae1is, addr);
-	dsb(ish);
-}
-
 static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
 {
 	unsigned long addr = __TLBI_VADDR(kaddr, 0);

From patchwork Thu Aug 30 16:15:46 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 145543
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com, torvalds@linux-foundation.org,
    npiggin@gmail.com, catalin.marinas@arm.com,
    linux-arm-kernel@lists.infradead.org, Will Deacon
Subject: [PATCH 12/12] arm64: tlb: Rewrite stale comment in asm/tlbflush.h
Date: Thu, 30 Aug 2018 17:15:46 +0100
Message-Id: <1535645747-9823-13-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>

Peter Z asked me to justify the barrier usage in asm/tlbflush.h, but
actually that whole block comment needs to be rewritten.

Reported-by: Peter Zijlstra
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/tlbflush.h | 80 +++++++++++++++++++++++++++------------
 1 file changed, 55 insertions(+), 25 deletions(-)

-- 
2.1.4

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index c98ed8871030..c3c0387aee18 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -70,43 +70,73 @@
 	})
 
 /*
- * TLB Management
- * ==============
+ * TLB Invalidation
+ * ================
  *
- * The TLB specific code is expected to perform whatever tests it needs
- * to determine if it should invalidate the TLB for each call.  Start
- * addresses are inclusive and end addresses are exclusive; it is safe to
- * round these addresses down.
+ * This header file implements the low-level TLB invalidation routines
+ * (sometimes referred to as "flushing" in the kernel) for arm64.
  *
- * flush_tlb_all()
+ * Every invalidation operation uses the following template:
+ *
+ *	DSB ISHST	// Ensure prior page-table updates have completed
+ *	TLBI ...	// Invalidate the TLB
+ *	DSB ISH		// Ensure the TLB invalidation has completed
+ *	if (invalidated kernel mappings)
+ *		ISB	// Discard any instructions fetched from the old mapping
+ *
+ *
+ * The following functions form part of the "core" TLB invalidation API,
+ * as documented in Documentation/core-api/cachetlb.rst:
  *
- *	Invalidate the entire TLB.
+ * flush_tlb_all()
+ *	Invalidate the entire TLB (kernel + user) on all CPUs
  *
  * flush_tlb_mm(mm)
+ *	Invalidate an entire user address space on all CPUs.
+ *	The 'mm' argument identifies the ASID to invalidate.
+ *
+ * flush_tlb_range(vma, start, end)
+ *	Invalidate the virtual-address range '[start, end)' on all
+ *	CPUs for the user address space corresponding to 'vma->mm'.
+ *	Note that this operation also invalidates any walk-cache
+ *	entries associated with translations for the specified address
+ *	range.
+ *
+ * flush_tlb_kernel_range(start, end)
+ *	Same as flush_tlb_range(..., start, end), but applies to
+ *	kernel mappings rather than a particular user address space.
+ *	Whilst not explicitly documented, this function is used when
+ *	unmapping pages from vmalloc/io space.
+ *
+ * flush_tlb_page(vma, addr)
+ *	Invalidate a single user mapping for address 'addr' in the
+ *	address space corresponding to 'vma->mm'.  Note that this
+ *	operation only invalidates a single, last-level page-table
+ *	entry and therefore does not affect any walk-caches.
  *
- *	Invalidate all TLB entries in a particular address space.
- *	- mm	- mm_struct describing address space
  *
- * flush_tlb_range(mm,start,end)
+ * Next, we have some undocumented invalidation routines that you probably
+ * don't want to call unless you know what you're doing:
  *
- *	Invalidate a range of TLB entries in the specified address
- *	space.
- *	- mm	- mm_struct describing address space
- *	- start - start address (may not be aligned)
- *	- end	- end address (exclusive, may not be aligned)
+ * local_flush_tlb_all()
+ *	Same as flush_tlb_all(), but only applies to the calling CPU.
  *
- * flush_tlb_page(vaddr,vma)
+ * __flush_tlb_kernel_pgtable(addr)
+ *	Invalidate a single kernel mapping for address 'addr' on all
+ *	CPUs, ensuring that any walk-cache entries associated with the
+ *	translation are also invalidated.
  *
- *	Invalidate the specified page in the specified address range.
- *	- vaddr - virtual address (may not be aligned)
- *	- vma	- vma_struct describing address range
+ * __flush_tlb_range(vma, start, end, stride, last_level)
+ *	Invalidate the virtual-address range '[start, end)' on all
+ *	CPUs for the user address space corresponding to 'vma->mm'.
+ *	The invalidation operations are issued at a granularity
+ *	determined by 'stride' and only affect any walk-cache entries
+ *	if 'last_level' is equal to false.
  *
- * flush_kern_tlb_page(kaddr)
  *
- *	Invalidate the TLB entry for the specified page.  The address
- *	will be in the kernels virtual memory space.  Current uses
- *	only require the D-TLB to be invalidated.
- *	- kaddr - Kernel virtual memory address
+ * Finally, take a look at asm/tlb.h to see how tlb_flush() is implemented
+ * on top of these routines, since that is our interface to the mmu_gather
+ * API as used by munmap() and friends.
  */
 static inline void local_flush_tlb_all(void)
 {