From patchwork Wed Mar 12 13:40:21 2014
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 26109
From: Steve Capper <steve.capper@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: will.deacon@arm.com, catalin.marinas@arm.com, linux@arm.linux.org.uk,
	chanho61.park@samsung.com, zishen.lim@linaro.org, patches@linaro.org,
	gary.robertson@linaro.org, michael.hudson@linaro.org,
	christoffer.dall@linaro.org, peterz@infradead.org,
	Steve Capper <steve.capper@linaro.org>
Subject: [RFC PATCH V3 4/6] arm64: Convert asm/tlb.h to generic mmu_gather
Date: Wed, 12 Mar 2014 13:40:21 +0000
Message-Id: <1394631623-17883-5-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1394631623-17883-1-git-send-email-steve.capper@linaro.org>
References: <1394631623-17883-1-git-send-email-steve.capper@linaro.org>

From: Catalin Marinas <catalin.marinas@arm.com>

Over the past couple of years, the generic mmu_gather gained range
tracking - 597e1c3580b7 (mm/mmu_gather: enable tlb flush range in generic
mmu_gather), 2b047252d087 (Fix TLB gather virtual address range
invalidation corner cases) - and tlb_fast_mode() has been removed -
29eb77825cc7 (arch, mm: Remove tlb_fast_mode()). The new mmu_gather
structure is now suitable for arm64, and this patch converts the arch
asm/tlb.h to the generic code.

One functional difference is the shift_arg_pages() case: previously the
code flushed the full mm (no tlb_start_vma call), whereas now it flushes
the range given to tlb_gather_mmu() (the previous behaviour was possibly
slightly more efficient).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
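A note for reviewers coming to this from the old arch-private
implementation: the commits cited above moved range tracking into the core
mmu_gather, so tlb->start/tlb->end are maintained by common code and the
arch only has to decide how to flush. The stand-alone sketch below models
that behaviour; it is illustrative only (demo_gather, DEMO_PAGE_SIZE and
DEMO_TASK_SIZE are made-up stand-ins, not kernel identifiers):

#include <stdio.h>

#define DEMO_PAGE_SIZE 4096UL
#define DEMO_TASK_SIZE (~0UL)

/* Models the fields of the generic struct mmu_gather used below. */
struct demo_gather {
        int fullmm;               /* non-zero when tearing down a whole mm */
        unsigned long start, end; /* accumulated virtual address range */
};

/* Models tlb_add_flush(): grow the range to cover [addr, addr + PAGE_SIZE). */
static void demo_add_flush(struct demo_gather *tlb, unsigned long addr)
{
        if (!tlb->fullmm) {
                if (addr < tlb->start)
                        tlb->start = addr;
                if (addr + DEMO_PAGE_SIZE > tlb->end)
                        tlb->end = addr + DEMO_PAGE_SIZE;
        }
}

/* Models the new tlb_flush(): full-mm flush, ranged flush, or nothing. */
static void demo_flush(struct demo_gather *tlb)
{
        if (tlb->fullmm) {
                printf("flush_tlb_mm()\n");
        } else if (tlb->end > 0) {
                printf("flush_tlb_range(0x%lx, 0x%lx)\n", tlb->start, tlb->end);
                tlb->start = DEMO_TASK_SIZE;    /* reset for the next batch */
                tlb->end = 0;
        }
}

int main(void)
{
        struct demo_gather tlb = { .fullmm = 0,
                                   .start = DEMO_TASK_SIZE, .end = 0 };

        demo_add_flush(&tlb, 0x400000);
        demo_add_flush(&tlb, 0x403000);
        demo_flush(&tlb);       /* flush_tlb_range(0x400000, 0x404000) */
        return 0;
}

With fullmm = 0 the two calls collapse into a single ranged flush; with
fullmm = 1 the accumulation is skipped and the whole mm is flushed, which
is the same decision the patched tlb_flush() makes.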
 arch/arm64/include/asm/tlb.h | 136 +++++++------------------------------------
 1 file changed, 20 insertions(+), 116 deletions(-)

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 717031a..72cadf5 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -19,115 +19,44 @@
 #ifndef __ASM_TLB_H
 #define __ASM_TLB_H
 
-#include <linux/pagemap.h>
-#include <linux/swap.h>
-#include <asm/pgalloc.h>
-#include <asm/tlbflush.h>
-
-#define MMU_GATHER_BUNDLE	8
-
-/*
- * TLB handling. This allows us to remove pages from the page
- * tables, and efficiently handle the TLB issues.
- */
-struct mmu_gather {
-	struct mm_struct	*mm;
-	unsigned int		fullmm;
-	struct vm_area_struct	*vma;
-	unsigned long		start, end;
-	unsigned long		range_start;
-	unsigned long		range_end;
-	unsigned int		nr;
-	unsigned int		max;
-	struct page		**pages;
-	struct page		*local[MMU_GATHER_BUNDLE];
-};
+#include <asm-generic/tlb.h>
 
 /*
- * This is unnecessarily complex. There's three ways the TLB shootdown
- * code is used:
+ * There's three ways the TLB shootdown code is used:
  * 1. Unmapping a range of vmas. See zap_page_range(), unmap_region().
  *    tlb->fullmm = 0, and tlb_start_vma/tlb_end_vma will be called.
- *    tlb->vma will be non-NULL.
  * 2. Unmapping all vmas. See exit_mmap().
  *    tlb->fullmm = 1, and tlb_start_vma/tlb_end_vma will be called.
- *    tlb->vma will be non-NULL. Additionally, page tables will be freed.
+ *    Page tables will be freed.
  * 3. Unmapping argument pages. See shift_arg_pages().
  *    tlb->fullmm = 0, but tlb_start_vma/tlb_end_vma will not be called.
- *    tlb->vma will be NULL.
  */
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
-	if (tlb->fullmm || !tlb->vma)
+	if (tlb->fullmm) {
 		flush_tlb_mm(tlb->mm);
-	else if (tlb->range_end > 0) {
-		flush_tlb_range(tlb->vma, tlb->range_start, tlb->range_end);
-		tlb->range_start = TASK_SIZE;
-		tlb->range_end = 0;
+	} else if (tlb->end > 0) {
+		struct vm_area_struct vma = { .vm_mm = tlb->mm, };
+		flush_tlb_range(&vma, tlb->start, tlb->end);
+		tlb->start = TASK_SIZE;
+		tlb->end = 0;
 	}
 }
 
 static inline void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
 {
 	if (!tlb->fullmm) {
-		if (addr < tlb->range_start)
-			tlb->range_start = addr;
-		if (addr + PAGE_SIZE > tlb->range_end)
-			tlb->range_end = addr + PAGE_SIZE;
-	}
-}
-
-static inline void __tlb_alloc_page(struct mmu_gather *tlb)
-{
-	unsigned long addr = __get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
-
-	if (addr) {
-		tlb->pages = (void *)addr;
-		tlb->max = PAGE_SIZE / sizeof(struct page *);
+		tlb->start = min(tlb->start, addr);
+		tlb->end = max(tlb->end, addr + PAGE_SIZE);
 	}
 }
 
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
-{
-	tlb_flush(tlb);
-	free_pages_and_swap_cache(tlb->pages, tlb->nr);
-	tlb->nr = 0;
-	if (tlb->pages == tlb->local)
-		__tlb_alloc_page(tlb);
-}
-
-static inline void
-tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long start, unsigned long end)
-{
-	tlb->mm = mm;
-	tlb->fullmm = !(start | (end+1));
-	tlb->start = start;
-	tlb->end = end;
-	tlb->vma = NULL;
-	tlb->max = ARRAY_SIZE(tlb->local);
-	tlb->pages = tlb->local;
-	tlb->nr = 0;
-	__tlb_alloc_page(tlb);
-}
-
-static inline void
-tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
-{
-	tlb_flush_mmu(tlb);
-
-	/* keep the page table cache within bounds */
-	check_pgt_cache();
-
-	if (tlb->pages != tlb->local)
-		free_pages((unsigned long)tlb->pages, 0);
-}
-
 /*
  * Memorize the range for the TLB flush.
  */
-static inline void
-tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long addr)
+static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
+					  unsigned long addr)
 {
 	tlb_add_flush(tlb, addr);
 }
@@ -137,38 +66,24 @@ tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long addr)
  * case where we're doing a full MM flush. When we're doing a munmap,
  * the vmas are adjusted to only cover the region to be torn down.
  */
-static inline void
-tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+static inline void tlb_start_vma(struct mmu_gather *tlb,
+				 struct vm_area_struct *vma)
 {
 	if (!tlb->fullmm) {
-		tlb->vma = vma;
-		tlb->range_start = TASK_SIZE;
-		tlb->range_end = 0;
+		tlb->start = TASK_SIZE;
+		tlb->end = 0;
 	}
 }
 
-static inline void
-tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+static inline void tlb_end_vma(struct mmu_gather *tlb,
+			       struct vm_area_struct *vma)
 {
 	if (!tlb->fullmm)
 		tlb_flush(tlb);
 }
 
-static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	tlb->pages[tlb->nr++] = page;
-	VM_BUG_ON(tlb->nr > tlb->max);
-	return tlb->max - tlb->nr;
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	if (!__tlb_remove_page(tlb, page))
-		tlb_flush_mmu(tlb);
-}
-
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
-	unsigned long addr)
+				  unsigned long addr)
 {
 	pgtable_page_dtor(pte);
 	tlb_add_flush(tlb, addr);
@@ -184,16 +99,5 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 }
 #endif
 
-#define pte_free_tlb(tlb, ptep, addr)	__pte_free_tlb(tlb, ptep, addr)
-#define pmd_free_tlb(tlb, pmdp, addr)	__pmd_free_tlb(tlb, pmdp, addr)
-#define pud_free_tlb(tlb, pudp, addr)	pud_free((tlb)->mm, pudp)
-
-#define tlb_migrate_finish(mm)		do { } while (0)
-
-static inline void
-tlb_remove_pmd_tlb_entry(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr)
-{
-	tlb_add_flush(tlb, addr);
-}
 
 #endif
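One detail of the new tlb_flush() worth calling out: the generic mmu_gather
has no tlb->vma, so the ranged flush fabricates a vm_area_struct on the
stack that carries nothing but .vm_mm. This works because arm64's
flush_tlb_range() reads only vma->vm_mm (to pick up the ASID). The
stand-alone sketch below shows the same pattern with toy types; demo_mm,
demo_vma and demo_flush_tlb_range are illustrative stand-ins, not kernel
identifiers:

#include <stdio.h>

/* Toy stand-ins for the kernel types, reduced to the fields the trick needs. */
struct demo_mm { unsigned int asid; };
struct demo_vma { struct demo_mm *vm_mm; };

/* Stands in for arm64's flush_tlb_range(), which only reads vma->vm_mm. */
static void demo_flush_tlb_range(struct demo_vma *vma,
                                 unsigned long start, unsigned long end)
{
        printf("flush ASID %u, range [0x%lx, 0x%lx)\n",
               vma->vm_mm->asid, start, end);
}

static void demo_ranged_flush(struct demo_mm *mm,
                              unsigned long start, unsigned long end)
{
        /* The patch's trick: a temporary vma existing only to carry vm_mm. */
        struct demo_vma vma = { .vm_mm = mm };

        demo_flush_tlb_range(&vma, start, end);
}

int main(void)
{
        struct demo_mm mm = { .asid = 42 };

        demo_ranged_flush(&mm, 0x400000, 0x404000);
        return 0;
}

If flush_tlb_range() ever grew a dependency on other vma fields, this
scheme would need revisiting; as of this series it only needs the mm.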