From patchwork Fri Apr 10 13:53:52 2015
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 47048
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: mark.rutland@arm.com, catalin.marinas@arm.com,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 08/11] arm64: mm: explicitly bootstrap the linear mapping
Date: Fri, 10 Apr 2015 15:53:52 +0200
Message-Id: <1428674035-26603-9-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1428674035-26603-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1428674035-26603-1-git-send-email-ard.biesheuvel@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>

In preparation for moving the kernel text out of the linear mapping,
ensure that the part of the kernel Image that contains the statically
allocated page tables is made accessible via the linear mapping before
performing the actual mapping of all of memory. This is needed by the
normal mapping routines, which rely on the linear mapping to walk the
page tables while manipulating them.
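To illustrate that dependency: page table entries hold physical
addresses, so any routine that walks or modifies a table must first
translate those back to virtual addresses. A minimal standalone sketch
of that translation step (toy_va() and the flat ram[] array are
hypothetical stand-ins for __va() and the linear mapping, not kernel
code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy "physical" RAM; a physical address is an offset into this array. */
static uint8_t ram[1 << 16];

/* Stand-in for __va(): only valid for PAs the linear mapping covers. */
static void *toy_va(uint32_t pa)
{
	return &ram[pa];
}

int main(void)
{
	/* a one-level toy table whose entries store physical addresses */
	uint32_t *table = toy_va(0x1000);	/* table itself at PA 0x1000 */
	char *payload = toy_va(0x2000);		/* payload at PA 0x2000 */

	strcpy(payload, "next-level table");
	table[0] = 0x2000;		/* the entry records a PA, not a VA ... */

	/* ... so walking the table needs a PA -> VA translation */
	printf("%s\n", (char *)toy_va(table[0]));
	return 0;
}

If swapper_pg_dir itself lay outside the region covered by the linear
mapping, that translation would yield an unmapped address, which is why
this patch maps the pgdir region before mapping the rest of memory.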
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/vmlinux.lds.S |  10 +++-
 arch/arm64/mm/mmu.c             | 104 ++++++++++++++++++++++++++++------------
 2 files changed, 81 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 604f285d3832..b7cdf4feb9f1 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -160,8 +160,7 @@ SECTIONS
 
 	BSS_SECTION(0, 0, 0)
 
-	.pgdir (NOLOAD) : {
-		. = ALIGN(PAGE_SIZE);
+	.pgdir (NOLOAD) : ALIGN(SZ_1M) {
 		idmap_pg_dir = .;
 		. += IDMAP_DIR_SIZE;
 		swapper_pg_dir = .;
@@ -187,6 +186,13 @@ ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
 	"ID map text too big or misaligned")
 
 /*
+ * The pgdir region needs to be mappable using a single PMD or PUD sized region,
+ * so it should not cross a 512 MB or 1 GB alignment boundary, respectively
+ * (depending on page size). So align to an upper bound of its size.
+ */
+ASSERT(SIZEOF(.pgdir) < ALIGNOF(.pgdir), ".pgdir size exceeds its alignment")
+
+/*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
 ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index c0427b5c90c7..ea35ec911393 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -405,26 +405,83 @@ static void __init bootstrap_mem_region(unsigned long addr,
 	}
 }
 
+static void __init bootstrap_linear_mapping(void)
+{
+	/*
+	 * Bootstrap the linear range that covers swapper_pg_dir so that the
+	 * statically allocated page tables are accessible via the linear
+	 * mapping. This allows us to start using the normal create_mapping()
+	 * logic which relies on the ability to translate physical addresses
+	 * contained in page table entries to virtual addresses using __va().
+	 */
+	static struct mem_bootstrap_region linear_bs_region __pgdir;
+	const phys_addr_t swapper_phys = __pa(swapper_pg_dir);
+	const unsigned long swapper_virt = __phys_to_virt(swapper_phys);
+	struct memblock_region *reg;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	bootstrap_mem_region(swapper_virt, &linear_bs_region, &pmd,
+			     IS_ENABLED(CONFIG_ARM64_64K_PAGES) ? &pte : NULL);
+
+	/* now find the memblock that covers swapper_pg_dir, and clip */
+	for_each_memblock(memory, reg) {
+		phys_addr_t start = reg->base;
+		phys_addr_t end = start + reg->size;
+		unsigned long vstart, vend;
+
+		if (start > swapper_phys || end <= swapper_phys)
+			continue;
+
+#ifdef CONFIG_ARM64_64K_PAGES
+		/* clip the region to PMD size */
+		vstart = max(swapper_virt & PMD_MASK, __phys_to_virt(start));
+		vend = min(round_up(swapper_virt, PMD_SIZE),
+			   __phys_to_virt(end));
+
+		vstart = round_up(vstart, PAGE_SIZE);
+		vend = round_down(vend, PAGE_SIZE);
+
+		pte += pte_index(vstart);
+		do {
+			set_pte(pte++, __pte(__pa(vstart) | PAGE_KERNEL_EXEC));
+			vstart += PAGE_SIZE;
+		} while (vstart < vend);
+#else
+		/* clip the region to PUD size */
+		vstart = max(swapper_virt & PUD_MASK, __phys_to_virt(start));
+		vend = min(round_up(swapper_virt, PUD_SIZE),
+			   __phys_to_virt(end));
+
+		vstart = round_up(vstart, PMD_SIZE);
+		vend = round_down(vend, PMD_SIZE);
+
+		pmd += pmd_index(vstart);
+		do {
+			set_pmd(pmd++,
+				__pmd(__pa(vstart) | PROT_SECT_NORMAL_EXEC));
+			vstart += PMD_SIZE;
+		} while (vstart < vend);
+#endif
+
+		/*
+		 * Temporarily limit the memblock range. We need to do this as
+		 * create_mapping requires puds, pmds and ptes to be allocated
+		 * from memory addressable from the initial direct kernel
+		 * mapping.
+		 */
+		memblock_set_current_limit(__pa(vend));
+
+		return;
+	}
+	BUG();
+}
+
 static void __init map_mem(void)
 {
 	struct memblock_region *reg;
-	phys_addr_t limit;
 
-	/*
-	 * Temporarily limit the memblock range. We need to do this as
-	 * create_mapping requires puds, pmds and ptes to be allocated from
-	 * memory addressable from the initial direct kernel mapping.
-	 *
-	 * The initial direct kernel mapping, located at swapper_pg_dir, gives
-	 * us PUD_SIZE (4K pages) or PMD_SIZE (64K pages) memory starting from
-	 * PHYS_OFFSET (which must be aligned to 2MB as per
-	 * Documentation/arm64/booting.txt).
-	 */
-	if (IS_ENABLED(CONFIG_ARM64_64K_PAGES))
-		limit = PHYS_OFFSET + PMD_SIZE;
-	else
-		limit = PHYS_OFFSET + PUD_SIZE;
-	memblock_set_current_limit(limit);
+	bootstrap_linear_mapping();
 
 	/* map all the memory banks */
 	for_each_memblock(memory, reg) {
@@ -434,21 +491,6 @@ static void __init map_mem(void)
 		if (start >= end)
 			break;
 
-#ifndef CONFIG_ARM64_64K_PAGES
-		/*
-		 * For the first memory bank align the start address and
-		 * current memblock limit to prevent create_mapping() from
-		 * allocating pte page tables from unmapped memory.
-		 * When 64K pages are enabled, the pte page table for the
-		 * first PGDIR_SIZE is already present in swapper_pg_dir.
-		 */
-		if (start < limit)
-			start = ALIGN(start, PMD_SIZE);
-		if (end < limit) {
-			limit = end & PMD_MASK;
-			memblock_set_current_limit(limit);
-		}
-#endif
 		__map_memblock(start, end);
 	}
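As a worked example of the clipping done in bootstrap_linear_mapping()
above, here is a standalone sketch of the !CONFIG_ARM64_64K_PAGES
branch (the example addresses and the round_up_to()/round_down_to()
helpers are hypothetical; the PMD_SIZE/PUD_SIZE values assume 4 KB
pages):

#include <stdio.h>

#define PMD_SIZE	(2UL << 20)		/* 2 MB with 4 KB pages */
#define PUD_SIZE	(1UL << 30)		/* 1 GB with 4 KB pages */
#define PUD_MASK	(~(PUD_SIZE - 1))

static unsigned long round_up_to(unsigned long x, unsigned long a)
{
	return (x + a - 1) & ~(a - 1);
}

static unsigned long round_down_to(unsigned long x, unsigned long a)
{
	return x & ~(a - 1);
}

int main(void)
{
	/* hypothetical linear-map addresses */
	unsigned long swapper_virt = 0xffff000040a0c000UL;
	unsigned long mem_vstart   = 0xffff000040000000UL; /* memblock start */
	unsigned long mem_vend     = 0xffff000140000000UL; /* memblock end */

	/* clip the memblock to the single PUD that covers swapper_pg_dir */
	unsigned long vstart = swapper_virt & PUD_MASK;
	unsigned long vend = round_up_to(swapper_virt, PUD_SIZE);

	if (mem_vstart > vstart)
		vstart = mem_vstart;
	if (mem_vend < vend)
		vend = mem_vend;

	/* the window is mapped with PMD-sized blocks, so trim accordingly */
	vstart = round_up_to(vstart, PMD_SIZE);
	vend = round_down_to(vend, PMD_SIZE);

	printf("map [%#lx, %#lx) using 2 MB PMD blocks\n", vstart, vend);
	return 0;
}

Because the linker script aligns .pgdir to an upper bound of its own
size (which the new ASSERT enforces), the statically allocated tables
can never straddle two such windows, so clipping to a single PUD (or a
single PMD with 64 KB pages) always suffices.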