From patchwork Mon May 11 07:13:07 2015
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: catalin.marinas@arm.com, mark.rutland@arm.com, linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH 09/10] arm64: allow kernel Image to be loaded anywhere in physical memory
Date: Mon, 11 May 2015 09:13:07 +0200
Message-Id: <1431328388-3051-10-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1431328388-3051-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1431328388-3051-1-git-send-email-ard.biesheuvel@linaro.org>

This relaxes the kernel Image placement requirements, so that it
may be placed at any 2 MB aligned offset in physical memory. This is
accomplished by ignoring PHYS_OFFSET when installing memblocks, and
accounting for the apparent virtual offset of the kernel Image (in
addition to the 64 MB that it is moved below PAGE_OFFSET). As a result,
virtual address references below PAGE_OFFSET are correctly mapped onto
physical references into the kernel Image regardless of where it sits
in memory.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 Documentation/arm64/booting.txt | 20 +++++++++---------
 arch/arm64/mm/init.c            | 47 +++++++++++++++++++++++++++++++++++++----
 arch/arm64/mm/mmu.c             | 22 ++++++++++++++++---
 3 files changed, 72 insertions(+), 17 deletions(-)

diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 53f18e13d51c..7bd9feedb6f9 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -113,16 +113,16 @@ Header notes:
   depending on selected features, and is effectively unbound.
 
 The Image must be placed text_offset bytes from a 2MB aligned base
-address near the start of usable system RAM and called there. Memory
-below that base address is currently unusable by Linux, and therefore it
-is strongly recommended that this location is the start of system RAM.
-At least image_size bytes from the start of the image must be free for
-use by the kernel.
-
-Any memory described to the kernel (even that below the 2MB aligned base
-address) which is not marked as reserved from the kernel (e.g. with a
-memreserve region in the device tree) will be considered as available to
-the kernel.
+address anywhere in usable system RAM and called there. At least
+image_size bytes from the start of the image must be free for use
+by the kernel.
+NOTE: versions prior to v4.2 cannot make use of memory below the
+physical offset of the Image so it is recommended that the Image be
+placed as close as possible to the start of system RAM.
+
+Any memory described to the kernel which is not marked as reserved from
+the kernel (e.g., with a memreserve region in the device tree) will be
+considered as available to the kernel.
 
 Before jumping into the kernel, the following conditions must be met:

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 3909a5fe7d7c..4ee01ebc4029 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -35,6 +35,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -157,6 +158,45 @@ static int __init early_mem(char *p)
 }
 early_param("mem", early_mem);
 
+static void enforce_memory_limit(void)
+{
+	const phys_addr_t kbase = round_down(__pa(_text), MIN_KIMG_ALIGN);
+	u64 to_remove = memblock_phys_mem_size() - memory_limit;
+	phys_addr_t max_addr = 0;
+	struct memblock_region *r;
+
+	if (memory_limit == (phys_addr_t)ULLONG_MAX)
+		return;
+
+	/*
+	 * The kernel may be high up in physical memory, so try to apply the
+	 * limit below the kernel first, and only let the generic handling
+	 * take over if it turns out we haven't clipped enough memory yet.
+	 */
+	for_each_memblock(memory, r) {
+		if (r->base + r->size > kbase) {
+			u64 rem = min(to_remove, kbase - r->base);
+
+			max_addr = r->base + rem;
+			to_remove -= rem;
+			break;
+		}
+		if (to_remove <= r->size) {
+			max_addr = r->base + to_remove;
+			to_remove = 0;
+			break;
+		}
+		to_remove -= r->size;
+	}
+
+	/* truncate both memory and reserved regions */
+	memblock_remove_range(&memblock.memory, 0, max_addr);
+	memblock_remove_range(&memblock.reserved, 0, max_addr);
+
+	if (to_remove)
+		memblock_enforce_memory_limit(memory_limit);
+}
+
 void __init arm64_memblock_init(void)
 {
 	/*
@@ -164,12 +204,11 @@ void __init arm64_memblock_init(void)
 	 * with the linear mapping.
 	 */
 	const s64 linear_region_size = -(s64)PAGE_OFFSET;
-	u64 dram_base = memstart_addr - KIMAGE_OFFSET;
 
-	memblock_remove(0, dram_base);
-	memblock_remove(dram_base + linear_region_size, ULLONG_MAX);
+	memblock_remove(round_down(memblock_start_of_DRAM(), SZ_1G) +
+			linear_region_size, ULLONG_MAX);
 
-	memblock_enforce_memory_limit(memory_limit);
+	enforce_memory_limit();
 
 	/*
 	 * Register the kernel text, kernel data, initrd, and initial

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 9c94c8c78da7..7e3e6af9b55c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -432,11 +432,27 @@ static void __init bootstrap_linear_mapping(unsigned long va_offset)
 static void __init map_mem(void)
 {
 	struct memblock_region *reg;
+	u64 new_memstart_addr;
+	u64 new_va_offset;
 
-	bootstrap_linear_mapping(KIMAGE_OFFSET);
+	/*
+	 * Select a suitable value for the base of physical memory.
+	 * This should be equal to or below the lowest usable physical
+	 * memory address, and aligned to PUD/PMD size so that we can map
+	 * it efficiently.
+	 */
+	new_memstart_addr = round_down(memblock_start_of_DRAM(), SZ_1G);
+
+	/*
+	 * Calculate the offset between the kernel text mapping that exists
+	 * outside of the linear mapping, and its mapping in the linear region.
+	 */
+	new_va_offset = memstart_addr - new_memstart_addr;
+
+	bootstrap_linear_mapping(new_va_offset);
 
-	kernel_va_offset = KIMAGE_OFFSET;
-	memstart_addr -= KIMAGE_OFFSET;
+	kernel_va_offset = new_va_offset;
+	memstart_addr = new_memstart_addr;
 
 	/* map all the memory banks */
 	for_each_memblock(memory, reg) {