From patchwork Wed Sep 23 00:37:40 2015
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 54016
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org, mark.rutland@arm.com,
	will.deacon@arm.com, catalin.marinas@arm.com
Cc: Ard Biesheuvel
Subject: [PATCH v2 4/7] arm64: mm: explicitly bootstrap the linear mapping
Date: Tue, 22 Sep 2015 17:37:40 -0700
Message-Id: <1442968663-31843-5-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1442968663-31843-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1442968663-31843-1-git-send-email-ard.biesheuvel@linaro.org>

In preparation for moving the kernel text out of the linear mapping,
ensure that the part of the kernel Image that contains the statically
allocated page tables is made accessible via the linear mapping before
performing the actual mapping of all of memory.

This is needed by the normal mapping routines, which rely on the linear
mapping to walk the page tables while manipulating them.

In addition, explicitly map the start of DRAM and set the memblock limit
so that all early memblock allocations are done from a region that is
guaranteed to be mapped.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/vmlinux.lds.S |  19 +++-
 arch/arm64/mm/mmu.c             | 109 ++++++++++++++------
 2 files changed, 98 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index ceec4def354b..0b82c4c203fb 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -68,6 +68,18 @@ PECOFF_FILE_ALIGNMENT = 0x200;
 #define ALIGN_DEBUG_RO_MIN(min)	. = ALIGN(min);
 #endif
 
+/*
+ * The pgdir region needs to be mappable using a single PMD or PUD sized region,
+ * so it should not cross a 512 MB or 1 GB alignment boundary, respectively
+ * (depending on page size). So align to a power-of-2 upper bound of the size
+ * of the entire __pgdir section.
+ */
+#if CONFIG_ARM64_PGTABLE_LEVELS == 2
+#define PGDIR_ALIGN	(8 * PAGE_SIZE)
+#else
+#define PGDIR_ALIGN	(16 * PAGE_SIZE)
+#endif
+
 SECTIONS
 {
 	/*
@@ -160,7 +172,7 @@ SECTIONS
 
 	BSS_SECTION(0, 0, 0)
 
-	.pgdir (NOLOAD) : ALIGN(PAGE_SIZE) {
+	.pgdir (NOLOAD) : ALIGN(PGDIR_ALIGN) {
 		idmap_pg_dir = .;
 		. += IDMAP_DIR_SIZE;
 		swapper_pg_dir = .;
@@ -185,6 +197,11 @@ ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
 	"ID map text too big or misaligned")
 
 /*
+ * Check that the chosen PGDIR_ALIGN value is sufficient.
+ */
+ASSERT(SIZEOF(.pgdir) <= ALIGNOF(.pgdir), ".pgdir size exceeds its alignment")
+
+/*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
 ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5af804334697..3f99cf1aaa0d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -380,26 +380,92 @@ static void __init bootstrap_early_mapping(unsigned long addr,
 	}
 }
 
-static void __init map_mem(void)
+/*
+ * Bootstrap a memory mapping in such a way that it does not require allocation
+ * of page tables beyond the ones that were allocated statically by
+ * bootstrap_early_mapping().
+ * This is done by finding the memblock that covers pa_base, intersecting it
+ * with the naturally aligned 512 MB or 1 GB region (depending on page size)
+ * that covers pa_base as well, and (on 4k pages) rounding it to section size.
+ */
+static unsigned long __init bootstrap_region(struct bootstrap_pgtables *reg,
+					     phys_addr_t pa_base,
+					     unsigned long va_offset)
 {
-	struct memblock_region *reg;
-	phys_addr_t limit;
+	unsigned long va_base = __phys_to_virt(pa_base) + va_offset;
+	struct memblock_region *mr;
+
+	bootstrap_early_mapping(va_base, reg,
+				IS_ENABLED(CONFIG_ARM64_64K_PAGES));
+
+	for_each_memblock(memory, mr) {
+		phys_addr_t start = mr->base;
+		phys_addr_t end = start + mr->size;
+		unsigned long vstart, vend;
+
+		if (start > pa_base || end <= pa_base)
+			continue;
+
+#ifdef CONFIG_ARM64_64K_PAGES
+		/* clip the region to PMD size */
+		vstart = max(va_base & PMD_MASK,
+			     round_up(__phys_to_virt(start) + va_offset,
+				      PAGE_SIZE));
+		vend = min(round_up(va_base + 1, PMD_SIZE),
+			   round_down(__phys_to_virt(end) + va_offset,
+				      PAGE_SIZE));
+#else
+		/* clip the region to PUD size */
+		vstart = max(va_base & PUD_MASK,
+			     round_up(__phys_to_virt(start) + va_offset,
+				      PMD_SIZE));
+		vend = min(round_up(va_base + 1, PUD_SIZE),
+			   round_down(__phys_to_virt(end) + va_offset,
+				      PMD_SIZE));
+#endif
+
+		create_mapping(__pa(vstart - va_offset), vstart, vend - vstart,
+			       PAGE_KERNEL_EXEC);
+
+		return vend;
+	}
+	return 0;
+}
+
+/*
+ * Bootstrap the linear ranges that cover the start of DRAM and swapper_pg_dir
+ * so that the statically allocated page tables as well as newly allocated ones
+ * are accessible via the linear mapping.
+ */
+static void __init bootstrap_linear_mapping(unsigned long va_offset)
+{
+	static struct bootstrap_pgtables __pgdir bs_pgdir_low, bs_pgdir_high;
+	unsigned long vend;
+
+	/* Bootstrap the mapping for the beginning of RAM */
+	vend = bootstrap_region(&bs_pgdir_low, memblock_start_of_DRAM(),
+				va_offset);
+	BUG_ON(vend == 0);
 
 	/*
 	 * Temporarily limit the memblock range. We need to do this as
 	 * create_mapping requires puds, pmds and ptes to be allocated from
-	 * memory addressable from the initial direct kernel mapping.
-	 *
-	 * The initial direct kernel mapping, located at swapper_pg_dir, gives
-	 * us PUD_SIZE (4K pages) or PMD_SIZE (64K pages) memory starting from
-	 * PHYS_OFFSET (which must be aligned to 2MB as per
-	 * Documentation/arm64/booting.txt).
+	 * memory addressable from the early linear mapping.
 	 */
-	if (IS_ENABLED(CONFIG_ARM64_64K_PAGES))
-		limit = PHYS_OFFSET + PMD_SIZE;
-	else
-		limit = PHYS_OFFSET + PUD_SIZE;
-	memblock_set_current_limit(limit);
+	memblock_set_current_limit(__pa(vend - va_offset));
+
+	/* Bootstrap the linear mapping of the kernel image */
+	vend = bootstrap_region(&bs_pgdir_high, __pa(swapper_pg_dir),
+				va_offset);
+	if (vend == 0)
+		panic("Kernel image not covered by memblock");
+}
+
+static void __init map_mem(void)
+{
+	struct memblock_region *reg;
+
+	bootstrap_linear_mapping(0);
 
 	/* map all the memory banks */
 	for_each_memblock(memory, reg) {
@@ -409,21 +475,6 @@ static void __init map_mem(void)
 		if (start >= end)
 			break;
 
-#ifndef CONFIG_ARM64_64K_PAGES
-		/*
-		 * For the first memory bank align the start address and
-		 * current memblock limit to prevent create_mapping() from
-		 * allocating pte page tables from unmapped memory.
-		 * When 64K pages are enabled, the pte page table for the
-		 * first PGDIR_SIZE is already present in swapper_pg_dir.
-		 */
-		if (start < limit)
-			start = ALIGN(start, PMD_SIZE);
-		if (end < limit) {
-			limit = end & PMD_MASK;
-			memblock_set_current_limit(limit);
-		}
-#endif
 		__map_memblock(start, end);
 	}
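
For anyone who wants to check the clipping logic by hand: the vstart/vend
computation in bootstrap_region() amounts to intersecting the memblock that
covers pa_base with the naturally aligned PUD-sized window that contains it
(PMD-sized on 64k pages), with the memblock edges rounded inwards to section
granularity. Below is a minimal stand-alone sketch of that arithmetic, not
kernel code: the addresses are made up, and it only shows the 4k-page case
where the window is PUD_SIZE (1 GB) and sections are PMD_SIZE (2 MB).

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define SZ_2M			(UINT64_C(2) << 20)	/* stands in for PMD_SIZE */
#define SZ_1G			(UINT64_C(1) << 30)	/* stands in for PUD_SIZE */

#define round_down(x, a)	((x) & ~((a) - 1))
#define round_up(x, a)		round_down((x) + (a) - 1, (a))
#define max(a, b)		((a) > (b) ? (a) : (b))
#define min(a, b)		((a) < (b) ? (a) : (b))

int main(void)
{
	/* made-up memblock and bootstrap base, not real addresses */
	uint64_t start = UINT64_C(0x80200000);	/* memblock start */
	uint64_t end   = UINT64_C(0xbff00000);	/* memblock end */
	uint64_t base  = UINT64_C(0x80200000);	/* address that must end up mapped */

	/* naturally aligned 1 GB window covering 'base' */
	uint64_t win_start = round_down(base, SZ_1G);
	uint64_t win_end   = win_start + SZ_1G;

	/* intersect with the memblock, rounding its edges inwards to 2 MB */
	uint64_t vstart = max(win_start, round_up(start, SZ_2M));
	uint64_t vend   = min(win_end, round_down(end, SZ_2M));

	printf("bootstrap mapping [0x%" PRIx64 ", 0x%" PRIx64 ")\n", vstart, vend);
	return 0;
}

With these example values the sketch prints [0x80200000, 0xbfe00000); the
64k-page variant is the same calculation with PMD_SIZE as the window and
PAGE_SIZE as the rounding granularity.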