From patchwork Tue May 6 15:30:06 2014
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 29718
From: Steve Capper <steve.capper@linaro.org>
To: linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com,
	linux@arm.linux.org.uk, linux-arch@vger.kernel.org, linux-mm@kvack.org
Cc: anders.roxell@linaro.org, peterz@infradead.org, gary.robertson@linaro.org,
	will.deacon@arm.com, Steve Capper <steve.capper@linaro.org>,
	akpm@linux-foundation.org, christoffer.dall@linaro.org
Subject: [RFC PATCH V5 3/6] arm: mm: Enable HAVE_RCU_TABLE_FREE logic
Date: Tue, 6 May 2014 16:30:06 +0100
Message-Id: <1399390209-1756-4-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1399390209-1756-1-git-send-email-steve.capper@linaro.org>
References: <1399390209-1756-1-git-send-email-steve.capper@linaro.org>

In order to implement fast_get_user_pages we need to ensure that the
page table walker is protected from page table pages being freed from
under it. One way to achieve this is to have the walker disable
interrupts, and rely on the TLB flushing code broadcasting IPIs, and
waiting for all CPUs to acknowledge them, before the page table pages
are freed.
On some ARM platforms we have hardware broadcasting of TLB
invalidation, thus the TLB flushing code won't necessarily broadcast
IPIs. Also, spuriously broadcasting IPIs can hurt system performance if
done too often.

This problem has already been solved on PowerPC and Sparc by batching
up page table pages belonging to more than one mm_user, then scheduling
an rcu_sched callback to free the pages. A page table walker that runs
with interrupts disabled prevents the rcu_sched grace period from
completing on its CPU, and therefore blocks the batched page table
pages from being freed under it. This logic has also been promoted to
core code and is activated when one enables HAVE_RCU_TABLE_FREE.

This patch enables HAVE_RCU_TABLE_FREE and incorporates it into the
existing ARM TLB logic.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
 arch/arm/Kconfig           |  1 +
 arch/arm/include/asm/tlb.h | 38 ++++++++++++++++++++++++++++++++++++--
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index db3c541..6cfdb3b 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -59,6 +59,7 @@ config ARM
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
+	select HAVE_RCU_TABLE_FREE if (SMP && CPU_V7)
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UID16
diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index f1a0dac..3cadb72 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -35,12 +35,39 @@
 
 #define MMU_GATHER_BUNDLE	8
 
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+static inline void __tlb_remove_table(void *_table)
+{
+	free_page_and_swap_cache((struct page *)_table);
+}
+
+struct mmu_table_batch {
+	struct rcu_head		rcu;
+	unsigned int		nr;
+	void			*tables[0];
+};
+
+#define MAX_TABLE_BATCH		\
+	((PAGE_SIZE - sizeof(struct mmu_table_batch)) / sizeof(void *))
+
+extern void tlb_table_flush(struct mmu_gather *tlb);
+extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
+
+#define tlb_remove_entry(tlb, entry)	tlb_remove_table(tlb, entry)
+#else
+#define tlb_remove_entry(tlb, entry)	tlb_remove_page(tlb, entry)
+#endif /* CONFIG_HAVE_RCU_TABLE_FREE */
+
 /*
  * TLB handling.  This allows us to remove pages from the page
  * tables, and efficiently handle the TLB issues.
  */
 struct mmu_gather {
 	struct mm_struct	*mm;
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+	struct mmu_table_batch	*batch;
+	unsigned int		need_flush;
+#endif
 	unsigned int		fullmm;
 	struct vm_area_struct	*vma;
 	unsigned long		start, end;
@@ -101,6 +128,9 @@ static inline void __tlb_alloc_page(struct mmu_gather *tlb)
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
 	tlb_flush(tlb);
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+	tlb_table_flush(tlb);
+#endif
 }
 
 static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
@@ -129,6 +159,10 @@ tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long start
 	tlb->pages = tlb->local;
 	tlb->nr = 0;
 	__tlb_alloc_page(tlb);
+
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+	tlb->batch = NULL;
+#endif
 }
 
 static inline void
@@ -205,7 +239,7 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 	tlb_add_flush(tlb, addr + SZ_1M);
 #endif
 
-	tlb_remove_page(tlb, pte);
+	tlb_remove_entry(tlb, pte);
 }
 
 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
@@ -213,7 +247,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 {
 #ifdef CONFIG_ARM_LPAE
 	tlb_add_flush(tlb, addr);
-	tlb_remove_page(tlb, virt_to_page(pmdp));
+	tlb_remove_entry(tlb, virt_to_page(pmdp));
 #endif
 }
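
For context, and not part of the patch itself: tlb_table_flush() and
tlb_remove_table(), declared in the tlb.h hunk above, are supplied by
core code (mm/memory.c) once HAVE_RCU_TABLE_FREE is selected. The
sketch below is a simplified reconstruction from memory of that core
batching logic, not the exact implementation; in particular the
allocation-failure fallback is only hinted at.

	/*
	 * Sketch of the core HAVE_RCU_TABLE_FREE helpers: freed page table
	 * pages are collected into an mmu_table_batch and only released to
	 * the allocator from an rcu_sched callback, i.e. after every CPU has
	 * passed through a point where interrupts were enabled.
	 */
	static void tlb_remove_table_rcu(struct rcu_head *head)
	{
		struct mmu_table_batch *batch;
		int i;

		batch = container_of(head, struct mmu_table_batch, rcu);

		for (i = 0; i < batch->nr; i++)
			__tlb_remove_table(batch->tables[i]);

		free_page((unsigned long)batch);
	}

	void tlb_table_flush(struct mmu_gather *tlb)
	{
		struct mmu_table_batch **batch = &tlb->batch;

		if (*batch) {
			/* Free the whole batch after an rcu_sched grace period. */
			call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu);
			*batch = NULL;
		}
	}

	void tlb_remove_table(struct mmu_gather *tlb, void *table)
	{
		struct mmu_table_batch **batch = &tlb->batch;

		tlb->need_flush = 1;

		/*
		 * With fewer than two users of this mm there cannot be a
		 * concurrent page table walk, so free the page directly.
		 */
		if (atomic_read(&tlb->mm->mm_users) < 2) {
			__tlb_remove_table(table);
			return;
		}

		if (*batch == NULL) {
			*batch = (struct mmu_table_batch *)
				__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
			if (*batch == NULL) {
				/*
				 * The real code falls back to a synchronous,
				 * IPI-based free here; omitted in this sketch.
				 */
				return;
			}
			(*batch)->nr = 0;
		}
		(*batch)->tables[(*batch)->nr++] = table;
		if ((*batch)->nr == MAX_TABLE_BATCH)
			tlb_table_flush(tlb);
	}

The walker side (the fast_get_user_pages patch later in this series)
then only has to perform the walk with interrupts disabled, roughly:

	unsigned long flags;

	local_irq_save(flags);
	/* walk pgd/pud/pmd/pte; table pages cannot be RCU-freed meanwhile */
	local_irq_restore(flags);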