From patchwork Mon Nov 16 11:23:15 2015
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org, mark.rutland@arm.com,
	catalin.marinas@arm.com, will.deacon@arm.com
Cc: suzuki.poulose@arm.com, james.morse@arm.com, labbott@fedoraproject.org
Subject: [PATCH v3 4/7] arm64: mm: explicitly bootstrap the linear mapping
Date: Mon, 16 Nov 2015 12:23:15 +0100
Message-Id: <1447672998-20981-5-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1447672998-20981-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1447672998-20981-1-git-send-email-ard.biesheuvel@linaro.org>
In preparation for moving the kernel text out of the linear mapping,
ensure that the part of the kernel Image that contains the statically
allocated page tables is made accessible via the linear mapping before
performing the actual mapping of all of memory. This is needed by the
normal mapping routines, which rely on the linear mapping to walk the
page tables while manipulating them.

In addition, explicitly map the start of DRAM and set the memblock
limit so that all early memblock allocations are done from a region
that is guaranteed to be mapped.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/vmlinux.lds.S | 18 +++-
 arch/arm64/mm/mmu.c             | 93 +++++++++++++++-----
 2 files changed, 86 insertions(+), 25 deletions(-)

-- 
1.9.1

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 87a596246ec7..63fca196c09e 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -72,6 +72,17 @@ PECOFF_FILE_ALIGNMENT = 0x200;
 #define ALIGN_DEBUG_RO_MIN(min)	. = ALIGN(min);
 #endif
 
+/*
+ * The pgdir region needs to be mappable using a single PMD or PUD sized region,
+ * so align it to a power-of-2 upper bound of its size. 16k/4 levels needs 20
+ * pages at the most, every other config needs at most 16 pages.
+ */
+#if defined(CONFIG_ARM64_16K_PAGES) && CONFIG_ARM64_PGTABLE_LEVELS == 4
+#define PGDIR_ALIGN	(32 * PAGE_SIZE)
+#else
+#define PGDIR_ALIGN	(16 * PAGE_SIZE)
+#endif
+
 SECTIONS
 {
 	/*
@@ -164,7 +175,7 @@ SECTIONS
 
 	BSS_SECTION(0, 0, 0)
 
-	.pgdir (NOLOAD) : ALIGN(PAGE_SIZE) {
+	.pgdir (NOLOAD) : ALIGN(PGDIR_ALIGN) {
 		idmap_pg_dir = .;
 		. += IDMAP_DIR_SIZE;
 		swapper_pg_dir = .;
@@ -189,6 +200,11 @@ ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
 	"ID map text too big or misaligned")
 
 /*
+ * Check that the chosen PGDIR_ALIGN value is sufficient.
+ */
+ASSERT(SIZEOF(.pgdir) <= ALIGNOF(.pgdir), ".pgdir size exceeds its alignment")
+
+/*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
 ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")
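[Editor's aside, not part of the patch: the PGDIR_ALIGN change relies on a
simple power-of-2 property -- a region aligned to a power-of-2 upper bound
of its size can never straddle a larger power-of-2 block boundary, so one
PMD- or PUD-sized block mapping always covers it. The stand-alone sketch
below demonstrates this; EX_PAGE_SIZE, EX_BLOCK_SIZE and the
crosses_block_boundary() helper are illustrative stand-ins modelling the
4k-pages case, not the kernel's definitions.]

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins, not the kernel definitions. */
#define EX_PAGE_SIZE	4096ULL			/* 4k pages */
#define EX_BLOCK_SIZE	(512 * EX_PAGE_SIZE)	/* 2 MB PMD block */

/* True if [base, base + size) straddles an EX_BLOCK_SIZE boundary. */
static int crosses_block_boundary(uint64_t base, uint64_t size)
{
	return (base / EX_BLOCK_SIZE) != ((base + size - 1) / EX_BLOCK_SIZE);
}

int main(void)
{
	uint64_t size = 20 * EX_PAGE_SIZE;	/* worst-case .pgdir size */
	uint64_t align = 32 * EX_PAGE_SIZE;	/* power-of-2 upper bound */
	uint64_t base;

	/* any base honouring the alignment stays inside one block ... */
	for (base = 0; base < 4 * EX_BLOCK_SIZE; base += align)
		assert(!crosses_block_boundary(base, size));

	/* ... while mere page alignment does not */
	printf("page-aligned counterexample crosses a boundary: %d\n",
	       crosses_block_boundary(EX_BLOCK_SIZE - EX_PAGE_SIZE, size));
	return 0;
}

[This is also what the ASSERT above enforces: if SIZEOF(.pgdir) ever
exceeded ALIGNOF(.pgdir), the upper-bound argument would no longer hold.]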
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 4f397a87c2be..81bb49eaa1a3 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -434,23 +434,86 @@ static void __init bootstrap_early_mapping(unsigned long addr,
 	}
 }
 
-static void __init map_mem(void)
+/*
+ * Bootstrap a memory mapping in such a way that it does not require allocation
+ * of page tables beyond the ones that were allocated statically by
+ * bootstrap_early_mapping().
+ * This is done by finding the memblock that covers pa_base, intersecting it
+ * with the naturally aligned 512 MB, 32 MB or 1 GB region (depending on page
+ * size) that also covers pa_base, and (on 4k pages) rounding to section size.
+ */
+static unsigned long __init bootstrap_region(struct bootstrap_pgtables *reg,
+					     phys_addr_t pa_base,
+					     unsigned long va_offset)
 {
-	struct memblock_region *reg;
-	phys_addr_t limit;
+	unsigned long va_base = __phys_to_virt(pa_base) + va_offset;
+	struct memblock_region *mr;
+
+	bootstrap_early_mapping(va_base, reg, !ARM64_SWAPPER_USES_SECTION_MAPS);
+
+	for_each_memblock(memory, mr) {
+		phys_addr_t start = mr->base;
+		phys_addr_t end = start + mr->size;
+		unsigned long vstart, vend;
+
+		if (start > pa_base || end <= pa_base)
+			continue;
+
+		/* clip the region to PMD size */
+		vstart = max(round_down(va_base, 1 << SWAPPER_TABLE_SHIFT),
+			     round_up(__phys_to_virt(start) + va_offset,
+				      SWAPPER_BLOCK_SIZE));
+		vend = min(round_up(va_base + 1, 1 << SWAPPER_TABLE_SHIFT),
+			   round_down(__phys_to_virt(end) + va_offset,
+				      SWAPPER_BLOCK_SIZE));
+
+		create_mapping(__pa(vstart - va_offset), vstart, vend - vstart,
+			       PAGE_KERNEL_EXEC);
+
+		return vend;
+	}
+	return 0;
+}
+
+/*
+ * Bootstrap the linear ranges that cover the start of DRAM and swapper_pg_dir
+ * so that the statically allocated page tables as well as newly allocated ones
+ * are accessible via the linear mapping.
+ */
+static void __init bootstrap_linear_mapping(unsigned long va_offset)
+{
+	static struct bootstrap_pgtables __pgdir bs_pgdir_low, bs_pgdir_high;
+	unsigned long vend;
+
+	/* Bootstrap the mapping for the beginning of RAM */
+	vend = bootstrap_region(&bs_pgdir_low, memblock_start_of_DRAM(),
+				va_offset);
+	BUG_ON(vend == 0);
 
 	/*
 	 * Temporarily limit the memblock range. We need to do this as
 	 * create_mapping requires puds, pmds and ptes to be allocated from
-	 * memory addressable from the initial direct kernel mapping.
+	 * memory addressable from the early linear mapping.
 	 *
 	 * The initial direct kernel mapping, located at swapper_pg_dir, gives
 	 * us PUD_SIZE (with SECTION maps) or PMD_SIZE (without SECTION maps)
 	 * memory starting from PHYS_OFFSET (which must be aligned to 2MB as
 	 * per Documentation/arm64/booting.txt).
 	 */
-	limit = PHYS_OFFSET + SWAPPER_INIT_MAP_SIZE;
-	memblock_set_current_limit(limit);
+	memblock_set_current_limit(__pa(vend - va_offset));
+
+	/* Bootstrap the linear mapping of the kernel image */
+	vend = bootstrap_region(&bs_pgdir_high, __pa(swapper_pg_dir),
+				va_offset);
+	if (vend == 0)
+		panic("Kernel image not covered by memblock");
+}
+
+static void __init map_mem(void)
+{
+	struct memblock_region *reg;
+
+	bootstrap_linear_mapping(0);
 
 	/* map all the memory banks */
 	for_each_memblock(memory, reg) {
@@ -460,24 +523,6 @@ static void __init map_mem(void)
 		if (start >= end)
 			break;
 
-		if (ARM64_SWAPPER_USES_SECTION_MAPS) {
-			/*
-			 * For the first memory bank align the start address
-			 * and current memblock limit to prevent
-			 * create_mapping() from allocating pte page tables
-			 * from unmapped memory. With the section maps, if the
-			 * first block doesn't end on section size boundary,
-			 * create_mapping() will try to allocate a pte page,
-			 * which may be returned from an unmapped area.
-			 * When section maps are not used, the pte page table
-			 * for the current limit is already present in
-			 * swapper_pg_dir.
-			 */
-			if (start < limit)
-				start = ALIGN(start, SECTION_SIZE);
-			if (end < limit) {
-				limit = end & SECTION_MASK;
-				memblock_set_current_limit(limit);
-			}
-		}
 		__map_memblock(start, end);
 	}
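[Editor's aside, not part of the patch: to make the clipping arithmetic in
bootstrap_region() concrete, the candidate range is the naturally aligned
1 << SWAPPER_TABLE_SHIFT region covering va_base, intersected with the
covering memblock after rounding its ends inward to SWAPPER_BLOCK_SIZE.
A minimal user-space model follows; the EX_* constants model the
4k-pages/section-maps case and the memblock addresses are invented for
illustration. va_offset is taken as 0, matching the map_mem() call above,
so VA and PA coincide in the model.]

#include <stdint.h>
#include <stdio.h>

/* Stand-ins modelling the 4k-pages/section-maps case. */
#define EX_BLOCK_SIZE	(2ULL << 20)	/* SWAPPER_BLOCK_SIZE: 2 MB */
#define EX_TABLE_SIZE	(1ULL << 30)	/* 1 << SWAPPER_TABLE_SHIFT: 1 GB */

#define ex_round_down(x, a)	((x) & ~((uint64_t)(a) - 1))
#define ex_round_up(x, a)	ex_round_down((x) + (a) - 1, (a))
#define ex_max(a, b)		((a) > (b) ? (a) : (b))
#define ex_min(a, b)		((a) < (b) ? (a) : (b))

int main(void)
{
	/* example memblock [1.5 MB, 3 GB) and a va_base inside it */
	uint64_t mb_start = 0x180000ULL;
	uint64_t mb_end   = 3ULL << 30;
	uint64_t va_base  = (1ULL << 30) + 0x80000;	/* va_offset == 0 */

	/* naturally aligned table region around va_base ... */
	uint64_t vstart = ex_max(ex_round_down(va_base, EX_TABLE_SIZE),
				 ex_round_up(mb_start, EX_BLOCK_SIZE));
	/* ... clipped to the memblock, rounded inward to block size */
	uint64_t vend   = ex_min(ex_round_up(va_base + 1, EX_TABLE_SIZE),
				 ex_round_down(mb_end, EX_BLOCK_SIZE));

	/* prints: map [0x40000000, 0x80000000) -- one 1 GB table region */
	printf("map [%#llx, %#llx)\n",
	       (unsigned long long)vstart, (unsigned long long)vend);
	return 0;
}

[Because the clipped result never leaves the single naturally aligned
table region covering va_base, create_mapping() can satisfy it entirely
from the statically allocated bootstrap page tables, which is the whole
point of bootstrap_region().]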