From patchwork Wed Apr 15 15:34:20 2015
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 47207
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: mark.rutland@arm.com, catalin.marinas@arm.com,
 linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v4 09/13] arm64: mm: explicitly bootstrap the linear mapping
Date: Wed, 15 Apr 2015 17:34:20 +0200
Message-Id: <1429112064-19952-10-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>

In preparation for moving the kernel text out of the linear mapping,
ensure that the part of the kernel Image that contains the statically
allocated page tables is made accessible via the linear mapping before
performing the actual mapping of all of memory.

This is needed by the normal mapping routines, which rely on the linear
mapping to walk the page tables while manipulating them.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/vmlinux.lds.S | 18 ++++++++-
 arch/arm64/mm/mmu.c             | 89 +++++++++++++++++++++++++++--------------
 2 files changed, 75 insertions(+), 32 deletions(-)
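A note on the "walk the page tables via the linear mapping" dependency: table
descriptors store physical addresses, while the kernel manipulates tables
through virtual pointers, so each step of a walk performs a phys-to-virt
conversion that is only valid if the table's physical page is covered by the
linear mapping. The standalone sketch below illustrates just that conversion;
it is not kernel code, and the PAGE_OFFSET/PHYS_OFFSET values and helper names
are made-up assumptions for illustration only.

#include <stdint.h>
#include <stdio.h>

/* illustrative layout assumptions only -- not the real kernel's values */
#define PAGE_OFFSET 0xffffffc000000000ULL	/* assumed VA base of the linear map */
#define PHYS_OFFSET 0x0000000080000000ULL	/* assumed PA base of DRAM */

static uint64_t virt_to_phys_lin(uint64_t va) { return va - PAGE_OFFSET + PHYS_OFFSET; }
static uint64_t phys_to_virt_lin(uint64_t pa) { return pa - PHYS_OFFSET + PAGE_OFFSET; }

/*
 * A table descriptor holds the *physical* address of the next-level table.
 * The kernel would cast the resulting VA to a pointer and dereference it,
 * which is only safe if that physical page is reachable through the linear
 * mapping -- exactly what bootstrap_linear_mapping() arranges for
 * swapper_pg_dir and for pages allocated while mapping the rest of memory.
 */
static uint64_t next_level_table_va(uint64_t desc)
{
	return phys_to_virt_lin(desc & ~0xfffULL);	/* strip attribute bits */
}

int main(void)
{
	uint64_t desc = (PHYS_OFFSET + 0x1000) | 0x3;	/* fake table descriptor */

	printf("next-level table PA %#llx -> VA %#llx\n",
	       (unsigned long long)(desc & ~0xfffULL),
	       (unsigned long long)next_level_table_va(desc));
	printf("round trip: %#llx\n",
	       (unsigned long long)virt_to_phys_lin(phys_to_virt_lin(PHYS_OFFSET)));
	return 0;
}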
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index ceec4def354b..338eaa7bcbfd 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -68,6 +68,17 @@ PECOFF_FILE_ALIGNMENT = 0x200;
 #define ALIGN_DEBUG_RO_MIN(min)	. = ALIGN(min);
 #endif
 
+/*
+ * The pgdir region needs to be mappable using a single PMD or PUD sized region,
+ * so it should not cross a 512 MB or 1 GB alignment boundary, respectively
+ * (depending on page size). So align to an upper bound of its size.
+ */
+#if CONFIG_ARM64_PGTABLE_LEVELS == 2
+#define PGDIR_ALIGN	(8 * PAGE_SIZE)
+#else
+#define PGDIR_ALIGN	(16 * PAGE_SIZE)
+#endif
+
 SECTIONS
 {
 	/*
@@ -160,7 +171,7 @@ SECTIONS
 
 	BSS_SECTION(0, 0, 0)
 
-	.pgdir (NOLOAD) : ALIGN(PAGE_SIZE) {
+	.pgdir (NOLOAD) : ALIGN(PGDIR_ALIGN) {
 		idmap_pg_dir = .;
 		. += IDMAP_DIR_SIZE;
 		swapper_pg_dir = .;
@@ -185,6 +196,11 @@ ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
 	"ID map text too big or misaligned")
 
 /*
+ * Check that the chosen PGDIR_ALIGN value is sufficient.
+ */
+ASSERT(SIZEOF(.pgdir) < ALIGNOF(.pgdir), ".pgdir size exceeds its alignment")
+
+/*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
 ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")
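The PGDIR_ALIGN comment and ASSERT above encode a simple invariant: a region
whose start is PGDIR_ALIGN-aligned and whose size does not exceed PGDIR_ALIGN
can never straddle a PMD or PUD boundary, because those boundaries are
themselves multiples of PGDIR_ALIGN. Below is a brute-force standalone check
of that claim, a sketch assuming 4 KB pages and a 1 GB PUD block (constants
chosen for illustration, not taken from the kernel headers):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096ULL
#define PUD_SIZE	(1ULL << 30)		/* 1 GB block, assuming 4 KB pages */
#define PGDIR_ALIGN	(16 * PAGE_SIZE)	/* upper bound of the .pgdir size */

/* does [start, start + size) straddle a boundary that is a multiple of 'block'? */
static int crosses(uint64_t start, uint64_t size, uint64_t block)
{
	return start / block != (start + size - 1) / block;
}

int main(void)
{
	/* try every PGDIR_ALIGN-aligned placement within one PUD-sized block */
	for (uint64_t start = 0; start < PUD_SIZE; start += PGDIR_ALIGN)
		assert(!crosses(start, PGDIR_ALIGN, PUD_SIZE));
	puts("an aligned .pgdir never straddles a PUD boundary");
	return 0;
}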
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index c27ab20a5ba9..93e5a2497f01 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -380,26 +380,68 @@ static void __init bootstrap_early_mapping(unsigned long addr,
 	}
 }
 
+static void __init bootstrap_linear_mapping(unsigned long va_offset)
+{
+	/*
+	 * Bootstrap the linear range that covers swapper_pg_dir so that the
+	 * statically allocated page tables as well as newly allocated ones
+	 * are accessible via the linear mapping.
+	 */
+	static struct bootstrap_pgtables linear_bs_pgtables __pgdir;
+	const phys_addr_t swapper_phys = __pa(swapper_pg_dir);
+	unsigned long swapper_virt = __phys_to_virt(swapper_phys) + va_offset;
+	struct memblock_region *reg;
+
+	bootstrap_early_mapping(swapper_virt, &linear_bs_pgtables,
+				IS_ENABLED(CONFIG_ARM64_64K_PAGES));
+
+	/* now find the memblock that covers swapper_pg_dir, and clip */
+	for_each_memblock(memory, reg) {
+		phys_addr_t start = reg->base;
+		phys_addr_t end = start + reg->size;
+		unsigned long vstart, vend;
+
+		if (start > swapper_phys || end <= swapper_phys)
+			continue;
+
+#ifdef CONFIG_ARM64_64K_PAGES
+		/* clip the region to PMD size */
+		vstart = max(swapper_virt & PMD_MASK,
+			     round_up(__phys_to_virt(start + va_offset),
+				      PAGE_SIZE));
+		vend = min(round_up(swapper_virt, PMD_SIZE),
+			   round_down(__phys_to_virt(end + va_offset),
+				      PAGE_SIZE));
+#else
+		/* clip the region to PUD size */
+		vstart = max(swapper_virt & PUD_MASK,
+			     round_up(__phys_to_virt(start + va_offset),
+				      PMD_SIZE));
+		vend = min(round_up(swapper_virt, PUD_SIZE),
+			   round_down(__phys_to_virt(end + va_offset),
+				      PMD_SIZE));
+#endif
+
+		create_mapping(__pa(vstart - va_offset), vstart, vend - vstart,
+			       PAGE_KERNEL_EXEC);
+
+		/*
+		 * Temporarily limit the memblock range. We need to do this as
+		 * create_mapping requires puds, pmds and ptes to be allocated
+		 * from memory addressable from the early linear mapping.
+		 */
+		memblock_set_current_limit(__pa(vend - va_offset));
+
+		return;
+	}
+	BUG();
+}
+
 static void __init map_mem(void)
 {
 	struct memblock_region *reg;
-	phys_addr_t limit;
 
-	/*
-	 * Temporarily limit the memblock range. We need to do this as
-	 * create_mapping requires puds, pmds and ptes to be allocated from
-	 * memory addressable from the initial direct kernel mapping.
-	 *
-	 * The initial direct kernel mapping, located at swapper_pg_dir, gives
-	 * us PUD_SIZE (4K pages) or PMD_SIZE (64K pages) memory starting from
-	 * PHYS_OFFSET (which must be aligned to 2MB as per
-	 * Documentation/arm64/booting.txt).
-	 */
-	if (IS_ENABLED(CONFIG_ARM64_64K_PAGES))
-		limit = PHYS_OFFSET + PMD_SIZE;
-	else
-		limit = PHYS_OFFSET + PUD_SIZE;
-	memblock_set_current_limit(limit);
+	bootstrap_linear_mapping(0);
 
 	/* map all the memory banks */
 	for_each_memblock(memory, reg) {
@@ -409,21 +451,6 @@ static void __init map_mem(void)
 		if (start >= end)
 			break;
 
-#ifndef CONFIG_ARM64_64K_PAGES
-		/*
-		 * For the first memory bank align the start address and
-		 * current memblock limit to prevent create_mapping() from
-		 * allocating pte page tables from unmapped memory.
-		 * When 64K pages are enabled, the pte page table for the
-		 * first PGDIR_SIZE is already present in swapper_pg_dir.
-		 */
-		if (start < limit)
-			start = ALIGN(start, PMD_SIZE);
-		if (end < limit) {
-			limit = end & PMD_MASK;
-			memblock_set_current_limit(limit);
-		}
-#endif
 		__map_memblock(start, end);
 	}
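For readers who want the clipping arithmetic in isolation: the vstart/vend
computation above narrows the memblock region containing swapper_pg_dir to
the single PUD-sized window (in the 4 KB page case) that the bootstrap tables
can map, at PMD granularity. The standalone sketch below reproduces that
arithmetic working directly in virtual addresses for simplicity; every address
and helper name in it is a made-up assumption for illustration, not kernel API.

#include <stdint.h>
#include <stdio.h>

#define PMD_SIZE 0x200000ULL		/* 2 MB, assuming 4 KB page geometry */
#define PUD_SIZE 0x40000000ULL		/* 1 GB */
#define PUD_MASK (~(PUD_SIZE - 1))

/* same semantics as the kernel's round_up()/round_down() for powers of two */
static uint64_t round_up64(uint64_t x, uint64_t a)   { return (x + a - 1) & ~(a - 1); }
static uint64_t round_down64(uint64_t x, uint64_t a) { return x & ~(a - 1); }
static uint64_t max64(uint64_t a, uint64_t b) { return a > b ? a : b; }
static uint64_t min64(uint64_t a, uint64_t b) { return a < b ? a : b; }

int main(void)
{
	/* made-up example values, expressed directly as virtual addresses */
	uint64_t region_start = 0x40080000;	/* memblock covering swapper_pg_dir */
	uint64_t region_end   = 0xbfe00000;
	uint64_t swapper_virt = 0x40f00000;	/* VA of swapper_pg_dir */

	/* clip to the PUD-sized window around swapper, at PMD granularity */
	uint64_t vstart = max64(swapper_virt & PUD_MASK,
				round_up64(region_start, PMD_SIZE));
	uint64_t vend   = min64(round_up64(swapper_virt, PUD_SIZE),
				round_down64(region_end, PMD_SIZE));

	/* prints [0x40200000, 0x80000000) for the values above */
	printf("bootstrap window: [%#llx, %#llx)\n",
	       (unsigned long long)vstart, (unsigned long long)vend);
	return 0;
}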