From patchwork Wed Apr 15 15:34:19 2015
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 47206
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: mark.rutland@arm.com, catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v4 08/13] arm64: split off early mapping code from early_fixmap_init()
Date: Wed, 15 Apr 2015 17:34:19 +0200
Message-Id: <1429112064-19952-9-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>

This splits off and generalises the population of the statically
allocated fixmap page tables so that we may reuse it later for the
linear mapping once we move the kernel text mapping out of it.
This also involves taking into account that table entries at any of
the levels we are populating may have been populated already, since
the fixmap mapping may no longer be disjoint from other early mappings
all the way up to the pgd level.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/compiler.h |  2 ++
 arch/arm64/kernel/vmlinux.lds.S   | 12 ++++----
 arch/arm64/mm/mmu.c               | 60 +++++++++++++++++++++++++++------------
 3 files changed, 51 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
index ee35fd0f2236..dd342af63673 100644
--- a/arch/arm64/include/asm/compiler.h
+++ b/arch/arm64/include/asm/compiler.h
@@ -27,4 +27,6 @@
  */
 #define __asmeq(x, y)  ".ifnc " x "," y " ; .err ; .endif\n\t"
 
+#define __pgdir	__attribute__((section(".pgdir"),aligned(PAGE_SIZE)))
+
 #endif	/* __ASM_COMPILER_H */
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 98073332e2d0..ceec4def354b 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -160,11 +160,13 @@ SECTIONS
 
 	BSS_SECTION(0, 0, 0)
 
-	. = ALIGN(PAGE_SIZE);
-	idmap_pg_dir = .;
-	. += IDMAP_DIR_SIZE;
-	swapper_pg_dir = .;
-	. += SWAPPER_DIR_SIZE;
+	.pgdir (NOLOAD) : ALIGN(PAGE_SIZE) {
+		idmap_pg_dir = .;
+		. += IDMAP_DIR_SIZE;
+		swapper_pg_dir = .;
+		. += SWAPPER_DIR_SIZE;
+		*(.pgdir)
+	}
 
 	_end = .;
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index aa99b7a0d660..c27ab20a5ba9 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -342,6 +342,44 @@ static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
 }
 #endif
 
+struct bootstrap_pgtables {
+	pte_t	pte[PTRS_PER_PTE];
+	pmd_t	pmd[PTRS_PER_PMD > 1 ? PTRS_PER_PMD : 0];
+	pud_t	pud[PTRS_PER_PUD > 1 ? PTRS_PER_PUD : 0];
+};
+
+static void __init bootstrap_early_mapping(unsigned long addr,
+					   struct bootstrap_pgtables *reg,
+					   bool pte_level)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pgd = pgd_offset_k(addr);
+	if (pgd_none(*pgd)) {
+		clear_page(reg->pud);
+		memblock_reserve(__pa(reg->pud), PAGE_SIZE);
+		pgd_populate(&init_mm, pgd, reg->pud);
+	}
+	pud = pud_offset(pgd, addr);
+	if (pud_none(*pud)) {
+		clear_page(reg->pmd);
+		memblock_reserve(__pa(reg->pmd), PAGE_SIZE);
+		pud_populate(&init_mm, pud, reg->pmd);
+	}
+
+	if (!pte_level)
+		return;
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd)) {
+		clear_page(reg->pte);
+		memblock_reserve(__pa(reg->pte), PAGE_SIZE);
+		pmd_populate_kernel(&init_mm, pmd, reg->pte);
+	}
+}
+
 static void __init map_mem(void)
 {
 	struct memblock_region *reg;
@@ -555,14 +593,6 @@ void vmemmap_free(unsigned long start, unsigned long end)
 }
 #endif	/* CONFIG_SPARSEMEM_VMEMMAP */
 
-static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
-#if CONFIG_ARM64_PGTABLE_LEVELS > 2
-static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss;
-#endif
-#if CONFIG_ARM64_PGTABLE_LEVELS > 3
-static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss;
-#endif
-
 static inline pud_t * fixmap_pud(unsigned long addr)
 {
 	pgd_t *pgd = pgd_offset_k(addr);
@@ -592,21 +622,15 @@ static inline pte_t * fixmap_pte(unsigned long addr)
 
 void __init early_fixmap_init(void)
 {
-	pgd_t *pgd;
-	pud_t *pud;
+	static struct bootstrap_pgtables fixmap_bs_pgtables __pgdir;
 	pmd_t *pmd;
-	unsigned long addr = FIXADDR_START;
 
-	pgd = pgd_offset_k(addr);
-	pgd_populate(&init_mm, pgd, bm_pud);
-	pud = pud_offset(pgd, addr);
-	pud_populate(&init_mm, pud, bm_pmd);
-	pmd = pmd_offset(pud, addr);
-	pmd_populate_kernel(&init_mm, pmd, bm_pte);
+	bootstrap_early_mapping(FIXADDR_START, &fixmap_bs_pgtables, true);
+	pmd = fixmap_pmd(FIXADDR_START);
 
 	/*
 	 * The boot-ioremap range spans multiple pmds, for which
-	 * we are not preparted:
+	 * we are not prepared:
 	 */
 	BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
 		     != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
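
As an aside, for anyone who wants to poke at the "populate only the
levels that are still empty" pattern outside the kernel: the toy
program below models what bootstrap_early_mapping() does with its
pgd_none()/pud_none()/pmd_none() checks. It is a stand-alone
user-space sketch, not kernel code; the two-level layout, the table
size and all names (toy_table, populate_level, etc.) are made up for
illustration only.

/*
 * Stand-alone model of the "populate only the missing levels"
 * bootstrap pattern: a next-level table is installed only if the
 * slot is still empty, so a later mapping that shares the upper
 * level reuses the table that is already in place.
 */
#include <stdio.h>
#include <string.h>

#define ENTRIES	4			/* entries per (toy) table level */

struct toy_table {
	void *slot[ENTRIES];
};

/* statically allocated tables, analogous to the __pgdir-placed structs */
static struct toy_table top;		/* plays the role of the pgd */
static struct toy_table bootstrap_next;	/* next-level table we may donate */
static struct toy_table spare_next;	/* donated by the second caller */

/*
 * Install @next under @tbl at @idx only if the slot is empty,
 * mirroring the *_none() checks in the patch; return whatever
 * table ends up (or already was) installed there.
 */
static struct toy_table *populate_level(struct toy_table *tbl, int idx,
					struct toy_table *next)
{
	if (tbl->slot[idx] == NULL) {
		memset(next, 0, sizeof(*next));	/* clear_page() equivalent */
		tbl->slot[idx] = next;
	}
	return tbl->slot[idx];
}

int main(void)
{
	/* first mapping installs the bootstrap table ... */
	struct toy_table *a = populate_level(&top, 1, &bootstrap_next);

	/* ... a second mapping sharing the same upper slot reuses it */
	struct toy_table *b = populate_level(&top, 1, &spare_next);

	printf("next-level table reused: %s\n", a == b ? "yes" : "no");
	return 0;
}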