From patchwork Thu Jun 19 10:49:21 2014
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 32211
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCHv3 2/4] arm64: place initial page tables above the kernel
Date: Thu, 19 Jun 2014 11:49:21 +0100
Message-Id: <1403174963-10730-3-git-send-email-mark.rutland@arm.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1403174963-10730-1-git-send-email-mark.rutland@arm.com>
References: <1403174963-10730-1-git-send-email-mark.rutland@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>, rob.herring@linaro.org,
	lauraa@codeaurora.org, peter.maydell@linaro.org, geoff@infradead.org,
	catalin.marinas@arm.com, will.deacon@arm.com, leif.lindholm@linaro.org,
	marc.zyngier@arm.com, kevin.hilman@linaro.org, ijc@hellion.org.uk,
	trini@ti.com, dave.martin@arm.com

Currently we place swapper_pg_dir and idmap_pg_dir below the kernel image,
between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However, bootloaders may
use portions of this memory below the kernel and we do not parse the memory
reservation list until after the MMU has been enabled. As such we may clobber
some memory a bootloader wishes to have preserved.
To enable the use of all of this memory by bootloaders (when the required
memory reservations are communicated to the kernel) it is necessary to move
our initial page tables elsewhere. As we currently have an effectively
unbounded requirement for memory at the end of the kernel image for .bss, we
can place the page tables there.

This patch moves the initial page tables to the end of the kernel image,
after the BSS. As they do not consist of any initialised data they will be
stripped from the kernel Image as with the BSS. The BSS clearing routine is
updated to stop at __bss_stop rather than _end so as not to clobber the page
tables, and memory reservations made redundant by the new organisation are
removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
---
 arch/arm64/include/asm/page.h   |  9 +++++++++
 arch/arm64/kernel/head.S        | 28 ++++++++--------------------
 arch/arm64/kernel/vmlinux.lds.S |  7 +++++++
 arch/arm64/mm/init.c            | 12 ++++--------
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 46bf666..a6331e6 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -31,6 +31,15 @@
 /* We do define AT_SYSINFO_EHDR but don't use the gate mechanism */
 #define __HAVE_ARCH_GATE_AREA		1
 
+/*
+ * The idmap and swapper page tables need some space reserved in the kernel
+ * image. The idmap only requires a pgd and a next level table to (section) map
+ * the kernel, while the swapper also maps the FDT and requires an additional
+ * table to map an early UART. See __create_page_tables for more information.
+ */
+#define SWAPPER_DIR_SIZE	(3 * PAGE_SIZE)
+#define IDMAP_DIR_SIZE		(2 * PAGE_SIZE)
+
 #ifndef __ASSEMBLY__
 
 #ifdef CONFIG_ARM64_64K_PAGES
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 7ec7817..e048f2b 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -35,29 +35,17 @@
 #include
 #include
 
-/*
- * swapper_pg_dir is the virtual address of the initial page table. We place
- * the page tables 3 * PAGE_SIZE below KERNEL_RAM_VADDR. The idmap_pg_dir has
- * 2 pages and is placed below swapper_pg_dir.
- */
 #define KERNEL_RAM_VADDR	(PAGE_OFFSET + TEXT_OFFSET)
 
 #if (KERNEL_RAM_VADDR & 0xfffff) != 0x80000
 #error KERNEL_RAM_VADDR must start at 0xXXX80000
 #endif
 
-#define SWAPPER_DIR_SIZE	(3 * PAGE_SIZE)
-#define IDMAP_DIR_SIZE		(2 * PAGE_SIZE)
-
-	.globl	swapper_pg_dir
-	.equ	swapper_pg_dir, KERNEL_RAM_VADDR - SWAPPER_DIR_SIZE
-
-	.globl	idmap_pg_dir
-	.equ	idmap_pg_dir, swapper_pg_dir - IDMAP_DIR_SIZE
-
-	.macro	pgtbl, ttb0, ttb1, phys
-	add	\ttb1, \phys, #TEXT_OFFSET - SWAPPER_DIR_SIZE
-	sub	\ttb0, \ttb1, #IDMAP_DIR_SIZE
+	.macro	pgtbl, ttb0, ttb1, virt_to_phys
+	ldr	\ttb1, =swapper_pg_dir
+	ldr	\ttb0, =idmap_pg_dir
+	add	\ttb1, \ttb1, \virt_to_phys
+	add	\ttb0, \ttb0, \virt_to_phys
 	.endm
 
 #ifdef CONFIG_ARM64_64K_PAGES
@@ -414,7 +402,7 @@ ENTRY(secondary_startup)
 	mov	x23, x0				// x23=current cpu_table
 	cbz	x23, __error_p			// invalid processor (x23=0)?
 
-	pgtbl	x25, x26, x24			// x25=TTBR0, x26=TTBR1
+	pgtbl	x25, x26, x28			// x25=TTBR0, x26=TTBR1
 	ldr	x12, [x23, #CPU_INFO_SETUP]
 	add	x12, x12, x28			// __virt_to_phys
 	blr	x12				// initialise processor
@@ -528,7 +516,7 @@ ENDPROC(__calc_phys_offset)
  * - pgd entry for fixed mappings (TTBR1)
  */
 __create_page_tables:
-	pgtbl	x25, x26, x24		// idmap_pg_dir and swapper_pg_dir addresses
+	pgtbl	x25, x26, x28		// idmap_pg_dir and swapper_pg_dir addresses
 	mov	x27, lr
 
 	/*
@@ -617,7 +605,7 @@ ENDPROC(__create_page_tables)
 __switch_data:
 	.quad	__mmap_switched
 	.quad	__bss_start			// x6
-	.quad	_end				// x7
+	.quad	__bss_stop			// x7
 	.quad	processor_id			// x4
 	.quad	__fdt_pointer			// x5
 	.quad	memstart_addr			// x6
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index f1e6d5c..c6648d3 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -104,6 +104,13 @@ SECTIONS
 	_edata = .;
 
 	BSS_SECTION(0, 0, 0)
+
+	. = ALIGN(PAGE_SIZE);
+	idmap_pg_dir = .;
+	. += IDMAP_DIR_SIZE;
+	swapper_pg_dir = .;
+	. += SWAPPER_DIR_SIZE;
+
 	_end = .;
 
 	STABS_DEBUG
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 091d428..35bca76 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -126,20 +126,16 @@ static void arm64_memory_present(void)
 
 void __init arm64_memblock_init(void)
 {
-	/* Register the kernel text, kernel data and initrd with memblock */
+	/*
+	 * Register the kernel text, kernel data, initrd, and initial
+	 * pagetables with memblock.
+	 */
 	memblock_reserve(__pa(_text), _end - _text);
 #ifdef CONFIG_BLK_DEV_INITRD
 	if (initrd_start)
 		memblock_reserve(__virt_to_phys(initrd_start), initrd_end - initrd_start);
 #endif
 
-	/*
-	 * Reserve the page tables.  These are already in use,
-	 * and can only be in node 0.
-	 */
-	memblock_reserve(__pa(swapper_pg_dir), SWAPPER_DIR_SIZE);
-	memblock_reserve(__pa(idmap_pg_dir), IDMAP_DIR_SIZE);
-
 	early_init_fdt_scan_reserved_mem();
 
 	dma_contiguous_reserve(0);
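
For illustration, the following standalone C sketch walks through the layout
arithmetic implied by the patch. It is not kernel code: the RAM base, image
size, page size, and the page alignment of the BSS end are made-up example
values; only the IDMAP_DIR_SIZE/SWAPPER_DIR_SIZE definitions, the
idmap-then-swapper ordering after .bss, and the single
memblock_reserve(__pa(_text), _end - _text) reservation mirror the patch.

/* Standalone sketch of the new layout; example values only, not kernel code. */
#include <stdio.h>

#define PAGE_SIZE        0x1000UL               /* example: 4K pages */
#define SWAPPER_DIR_SIZE (3 * PAGE_SIZE)        /* pgd + next-level + early UART table */
#define IDMAP_DIR_SIZE   (2 * PAGE_SIZE)        /* pgd + next-level (section map of kernel) */

int main(void)
{
	unsigned long phys_offset = 0x80000000UL;       /* example RAM base (PHYS_OFFSET) */
	unsigned long text_offset = 0x00080000UL;       /* TEXT_OFFSET */
	unsigned long text        = phys_offset + text_offset;  /* stands in for _text */
	unsigned long bss_stop    = text + 0x00400000UL;        /* example __bss_stop, page aligned */

	/* vmlinux.lds.S now places both directories between __bss_stop and _end */
	unsigned long idmap_dir   = bss_stop;                   /* idmap_pg_dir */
	unsigned long swapper_dir = idmap_dir + IDMAP_DIR_SIZE; /* swapper_pg_dir */
	unsigned long kernel_end  = swapper_dir + SWAPPER_DIR_SIZE; /* _end */

	/* Old scheme: tables lived below _text, inside the bootloader's region */
	printf("bootloader-usable below kernel: [%#lx, %#lx)\n", phys_offset, text);
	/* New scheme: one reservation covers text, data, bss and the page tables */
	printf("memblock_reserve(%#lx, %#lx)\n", text, kernel_end - text);
	printf("idmap_pg_dir=%#lx swapper_pg_dir=%#lx _end=%#lx\n",
	       idmap_dir, swapper_dir, kernel_end);
	return 0;
}

With 64K pages both directory sizes scale with PAGE_SIZE, but the ordering
after .bss (idmap_pg_dir, then swapper_pg_dir, then _end) and the single
kernel-image reservation are unchanged.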