From patchwork Fri Mar 28 15:01:30 2014
X-Patchwork-Submitter: Steve Capper <steve.capper@linaro.org>
X-Patchwork-Id: 27335
From: Steve Capper <steve.capper@linaro.org>
To: linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com,
	linux@arm.linux.org.uk, linux-mm@kvack.org, linux-arch@vger.kernel.org
Subject: [RFC PATCH V4 5/7] arm64: Convert asm/tlb.h to generic mmu_gather
Date: Fri, 28 Mar 2014 15:01:30 +0000
Message-Id: <1396018892-6773-6-git-send-email-steve.capper@linaro.org>
In-Reply-To: <1396018892-6773-1-git-send-email-steve.capper@linaro.org>
References: <1396018892-6773-1-git-send-email-steve.capper@linaro.org>
Cc: peterz@infradead.org, gary.robertson@linaro.org,
	akpm@linux-foundation.org, anders.roxell@linaro.org,
	Steve Capper <steve.capper@linaro.org>
From: Catalin Marinas <catalin.marinas@arm.com>

Over the past couple of years, the generic mmu_gather gained range
tracking - 597e1c3580b7 (mm/mmu_gather: enable tlb flush range in
generic mmu_gather), 2b047252d087 (Fix TLB gather virtual address range
invalidation corner cases) - and tlb_fast_mode() has been removed -
29eb77825cc7 (arch, mm: Remove tlb_fast_mode()). The new mmu_gather
structure is now suitable for arm64 and this patch converts the arch
asm/tlb.h to the generic code. One functional difference is the
shift_arg_pages() case where previously the code was flushing the full
mm (no tlb_start_vma call) but now it flushes the range given to
tlb_gather_mmu() (possibly slightly more efficient previously).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
I think Catalin already has this patch in his upstream tree, it's
included in this series for the sake of completeness.
---
 arch/arm64/include/asm/tlb.h | 136 +++++++------------------------------------
 1 file changed, 20 insertions(+), 116 deletions(-)

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 717031a..72cadf5 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -19,115 +19,44 @@
 #ifndef __ASM_TLB_H
 #define __ASM_TLB_H
 
-#include <linux/pagemap.h>
-#include <linux/swap.h>
-
-#include <asm/pgalloc.h>
-#include <asm/tlbflush.h>
-
-#define MMU_GATHER_BUNDLE	8
-
-/*
- * TLB handling.  This allows us to remove pages from the page
- * tables, and efficiently handle the TLB issues.
- */
-struct mmu_gather {
-	struct mm_struct	*mm;
-	unsigned int		fullmm;
-	struct vm_area_struct	*vma;
-	unsigned long		start, end;
-	unsigned long		range_start;
-	unsigned long		range_end;
-	unsigned int		nr;
-	unsigned int		max;
-	struct page		**pages;
-	struct page		*local[MMU_GATHER_BUNDLE];
-};
+#include <asm-generic/tlb.h>
 
 /*
- * This is unnecessarily complex.  There's three ways the TLB shootdown
- * code is used:
+ * There's three ways the TLB shootdown code is used:
  * 1. Unmapping a range of vmas.  See zap_page_range(), unmap_region().
  *    tlb->fullmm = 0, and tlb_start_vma/tlb_end_vma will be called.
- *    tlb->vma will be non-NULL.
  * 2. Unmapping all vmas.  See exit_mmap().
  *    tlb->fullmm = 1, and tlb_start_vma/tlb_end_vma will be called.
- *    tlb->vma will be non-NULL.  Additionally, page tables will be freed.
+ *    Page tables will be freed.
  * 3. Unmapping argument pages.  See shift_arg_pages().
  *    tlb->fullmm = 0, but tlb_start_vma/tlb_end_vma will not be called.
- *    tlb->vma will be NULL.
  */
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
-	if (tlb->fullmm || !tlb->vma)
+	if (tlb->fullmm) {
 		flush_tlb_mm(tlb->mm);
-	else if (tlb->range_end > 0) {
-		flush_tlb_range(tlb->vma, tlb->range_start, tlb->range_end);
-		tlb->range_start = TASK_SIZE;
-		tlb->range_end = 0;
+	} else if (tlb->end > 0) {
+		struct vm_area_struct vma = { .vm_mm = tlb->mm, };
+		flush_tlb_range(&vma, tlb->start, tlb->end);
+		tlb->start = TASK_SIZE;
+		tlb->end = 0;
 	}
 }
 
 static inline void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
 {
 	if (!tlb->fullmm) {
-		if (addr < tlb->range_start)
-			tlb->range_start = addr;
-		if (addr + PAGE_SIZE > tlb->range_end)
-			tlb->range_end = addr + PAGE_SIZE;
-	}
-}
-
-static inline void __tlb_alloc_page(struct mmu_gather *tlb)
-{
-	unsigned long addr = __get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
-
-	if (addr) {
-		tlb->pages = (void *)addr;
-		tlb->max = PAGE_SIZE / sizeof(struct page *);
+		tlb->start = min(tlb->start, addr);
+		tlb->end = max(tlb->end, addr + PAGE_SIZE);
 	}
 }
 
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
-{
-	tlb_flush(tlb);
-	free_pages_and_swap_cache(tlb->pages, tlb->nr);
-	tlb->nr = 0;
-	if (tlb->pages == tlb->local)
-		__tlb_alloc_page(tlb);
-}
-
-static inline void
-tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long start, unsigned long end)
-{
-	tlb->mm = mm;
-	tlb->fullmm = !(start | (end+1));
-	tlb->start = start;
-	tlb->end = end;
-	tlb->vma = NULL;
-	tlb->max = ARRAY_SIZE(tlb->local);
-	tlb->pages = tlb->local;
-	tlb->nr = 0;
-	__tlb_alloc_page(tlb);
-}
-
-static inline void
-tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
-{
-	tlb_flush_mmu(tlb);
-
-	/* keep the page table cache within bounds */
-	check_pgt_cache();
-
-	if (tlb->pages != tlb->local)
-		free_pages((unsigned long)tlb->pages, 0);
-}
-
 /*
  * Memorize the range for the TLB flush.
  */
-static inline void
-tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long addr)
+static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
+					  unsigned long addr)
 {
 	tlb_add_flush(tlb, addr);
 }
@@ -137,38 +66,24 @@ tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long addr)
  * case where we're doing a full MM flush.  When we're doing a munmap,
  * the vmas are adjusted to only cover the region to be torn down.
  */
-static inline void
-tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+static inline void tlb_start_vma(struct mmu_gather *tlb,
+				 struct vm_area_struct *vma)
 {
 	if (!tlb->fullmm) {
-		tlb->vma = vma;
-		tlb->range_start = TASK_SIZE;
-		tlb->range_end = 0;
+		tlb->start = TASK_SIZE;
+		tlb->end = 0;
 	}
 }
 
-static inline void
-tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+static inline void tlb_end_vma(struct mmu_gather *tlb,
+			       struct vm_area_struct *vma)
 {
 	if (!tlb->fullmm)
 		tlb_flush(tlb);
 }
 
-static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	tlb->pages[tlb->nr++] = page;
-	VM_BUG_ON(tlb->nr > tlb->max);
-	return tlb->max - tlb->nr;
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	if (!__tlb_remove_page(tlb, page))
-		tlb_flush_mmu(tlb);
-}
-
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
-	unsigned long addr)
+				  unsigned long addr)
 {
 	pgtable_page_dtor(pte);
 	tlb_add_flush(tlb, addr);
@@ -184,16 +99,5 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 }
 #endif
 
-#define pte_free_tlb(tlb, ptep, addr)	__pte_free_tlb(tlb, ptep, addr)
-#define pmd_free_tlb(tlb, pmdp, addr)	__pmd_free_tlb(tlb, pmdp, addr)
-#define pud_free_tlb(tlb, pudp, addr)	pud_free((tlb)->mm, pudp)
-
-#define tlb_migrate_finish(mm)	do { } while (0)
-
-static inline void
-tlb_remove_pmd_tlb_entry(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr)
-{
-	tlb_add_flush(tlb, addr);
-}
 
 #endif
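
For readers unfamiliar with the generic mmu_gather, the min/max range
accumulation that the converted tlb_add_flush()/tlb_flush() above rely
on can be modelled in a few lines of userspace C. This is a minimal
sketch only, not kernel code: the toy_* names and the PAGE_SIZE_TOY /
TASK_SIZE_TOY constants are invented stand-ins for the kernel's
PAGE_SIZE and TASK_SIZE.

/*
 * Illustrative sketch only, not kernel code. It mimics the range
 * tracking performed by tlb_add_flush()/tlb_flush() in the patch.
 */
#include <stdio.h>

#define PAGE_SIZE_TOY	4096UL
#define TASK_SIZE_TOY	(1UL << 39)	/* stand-in for TASK_SIZE */

struct toy_gather {
	unsigned long start;	/* lowest address queued for flushing */
	unsigned long end;	/* one byte past the highest queued page */
	int fullmm;		/* non-zero: tearing down the whole mm */
};

/* The empty range is encoded as start > end, as in the patch. */
static void toy_reset(struct toy_gather *tlb)
{
	tlb->start = TASK_SIZE_TOY;
	tlb->end = 0;
}

/* Mirrors tlb_add_flush(): grow the pending range to cover addr. */
static void toy_add_flush(struct toy_gather *tlb, unsigned long addr)
{
	if (!tlb->fullmm) {
		if (addr < tlb->start)
			tlb->start = addr;
		if (addr + PAGE_SIZE_TOY > tlb->end)
			tlb->end = addr + PAGE_SIZE_TOY;
	}
}

/* Mirrors tlb_flush(): one ranged flush instead of per-page flushes. */
static void toy_flush(struct toy_gather *tlb)
{
	if (tlb->fullmm) {
		printf("flush whole mm\n");
	} else if (tlb->end > 0) {
		printf("flush range: 0x%lx-0x%lx\n", tlb->start, tlb->end);
		toy_reset(tlb);
	}
}

int main(void)
{
	struct toy_gather tlb = { .fullmm = 0 };

	toy_reset(&tlb);
	toy_add_flush(&tlb, 0x400000);	/* unmap one page */
	toy_add_flush(&tlb, 0x402000);	/* unmap another, a page apart */
	toy_flush(&tlb);		/* prints 0x400000-0x403000 */
	return 0;
}

Running this prints "flush range: 0x400000-0x403000": two page unmaps
collapse into a single ranged flush, which is the behaviour the
shift_arg_pages() case now gets via the range passed to
tlb_gather_mmu(), instead of a full-mm flush.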