From patchwork Wed Sep 23 00:37:43 2015
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 54019
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org, mark.rutland@arm.com,
 will.deacon@arm.com, catalin.marinas@arm.com
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v2 7/7] arm64: allow kernel Image to be loaded anywhere in
 physical memory
Date: Tue, 22 Sep 2015 17:37:43 -0700
Message-Id: <1442968663-31843-8-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1442968663-31843-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1442968663-31843-1-git-send-email-ard.biesheuvel@linaro.org>

This relaxes the kernel Image placement requirements, so that it may be
placed at any 2 MB aligned offset in physical memory.

This is accomplished by ignoring PHYS_OFFSET when installing memblocks,
and accounting for the apparent virtual offset of the kernel Image (in
addition to the 64 MB that it is moved below PAGE_OFFSET). As a result,
virtual address references below PAGE_OFFSET are correctly mapped onto
physical references into the kernel Image, regardless of where it sits
in memory.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 Documentation/arm64/booting.txt | 12 ++---
 arch/arm64/include/asm/memory.h |  8 ++-
 arch/arm64/mm/init.c            | 51 +++++++++++++++++++-
 arch/arm64/mm/mmu.c             | 30 ++++++++++--
 4 files changed, 86 insertions(+), 15 deletions(-)
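As an illustration of the translation this sets up, consider the
following simplified sketch. It is hypothetical (sketch_virt_to_phys is
not a helper from this series), but it shows how a reference below
PAGE_OFFSET can be redirected through the linear mapping via
kernel_va_offset:

#include <linux/types.h>
#include <asm/memory.h>	/* PAGE_OFFSET, memstart_addr, kernel_va_offset */

/* Hypothetical sketch only; not the actual __virt_to_phys handling. */
static inline phys_addr_t sketch_virt_to_phys(u64 va)
{
	/*
	 * References below PAGE_OFFSET point into the kernel image
	 * mapping; adding kernel_va_offset yields the same byte's alias
	 * in the linear mapping, after which the usual linear translation
	 * (subtract PAGE_OFFSET, add memstart_addr) applies.
	 */
	if (va < PAGE_OFFSET)
		va += kernel_va_offset;

	return (phys_addr_t)(va - PAGE_OFFSET) + memstart_addr;
}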
diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 7d9d3c2286b2..baf207acd6dd 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -112,14 +112,14 @@ Header notes:
   depending on selected features, and is effectively unbound.
 
 The Image must be placed text_offset bytes from a 2MB aligned base
-address near the start of usable system RAM and called there. Memory
-below that base address is currently unusable by Linux, and therefore it
-is strongly recommended that this location is the start of system RAM.
-The region between the 2 MB aligned base address and the start of the
-image has no special significance to the kernel, and may be used for
-other purposes.
+address anywhere in usable system RAM and called there. The region
+between the 2 MB aligned base address and the start of the image has no
+special significance to the kernel, and may be used for other purposes.
 At least image_size bytes from the start of the image must be free for
 use by the kernel.
+NOTE: versions prior to v4.4 cannot make use of memory below the
+physical offset of the Image so it is recommended that the Image be
+placed as close as possible to the start of system RAM.
 
 Any memory described to the kernel (even that below the start of the
 image) which is not marked as reserved from the kernel (e.g., with a
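To make the relaxed rule above concrete: a loader may pick any usable
RAM address, round it up to a 2 MB boundary, and add text_offset. A
hypothetical loader-side helper (not part of this patch):

#include <stdint.h>

#define SZ_2M	0x200000ULL

/* Hypothetical helper: compute a valid Image load address. */
static uint64_t image_load_address(uint64_t ram_addr, uint64_t text_offset)
{
	/* round up to the next 2 MB boundary ... */
	uint64_t base = (ram_addr + SZ_2M - 1) & ~(SZ_2M - 1);

	/* ... and place the Image text_offset bytes above it */
	return base + text_offset;
}

At least image_size bytes from the resulting address must then be left
free for use by the kernel.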
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index bdea5b4c7be9..598661b268cc 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -121,12 +121,10 @@ extern phys_addr_t memstart_addr;
 extern u64 kernel_va_offset;
 
 /*
- * The maximum physical address that the linear direct mapping
- * of system RAM can cover. (PAGE_OFFSET can be interpreted as
- * a 2's complement signed quantity and negated to derive the
- * maximum size of the linear mapping.)
+ * Allow all memory at the discovery stage. We will clip it later.
  */
-#define MAX_MEMBLOCK_ADDR	({ memstart_addr - PAGE_OFFSET - 1; })
+#define MIN_MEMBLOCK_ADDR	0
+#define MAX_MEMBLOCK_ADDR	U64_MAX
 
 /*
  * PFNs are used to describe any physical page; this means
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index b9390eb1e29f..d3abc3555623 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -35,6 +35,7 @@
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
 
@@ -157,9 +158,57 @@ static int __init early_mem(char *p)
 }
 early_param("mem", early_mem);
 
+static void enforce_memory_limit(void)
+{
+	const phys_addr_t kbase = round_down(__pa(_text), MIN_KIMG_ALIGN);
+	u64 to_remove = memblock_phys_mem_size() - memory_limit;
+	phys_addr_t max_addr = 0;
+	struct memblock_region *r;
+
+	if (memory_limit == (phys_addr_t)ULLONG_MAX)
+		return;
+
+	/*
+	 * The kernel may be high up in physical memory, so try to apply the
+	 * limit below the kernel first, and only let the generic handling
+	 * take over if it turns out we haven't clipped enough memory yet.
+	 */
+	for_each_memblock(memory, r) {
+		if (r->base + r->size > kbase) {
+			u64 rem = min(to_remove, kbase - r->base);
+
+			max_addr = r->base + rem;
+			to_remove -= rem;
+			break;
+		}
+		if (to_remove <= r->size) {
+			max_addr = r->base + to_remove;
+			to_remove = 0;
+			break;
+		}
+		to_remove -= r->size;
+	}
+
+	/* truncate both memory and reserved regions */
+	memblock_remove_range(&memblock.memory, 0, max_addr);
+	memblock_remove_range(&memblock.reserved, 0, max_addr);
+
+	if (to_remove)
+		memblock_enforce_memory_limit(memory_limit);
+}
+
 void __init arm64_memblock_init(void)
 {
-	memblock_enforce_memory_limit(memory_limit);
+	/*
+	 * Remove the memory that we will not be able to cover
+	 * with the linear mapping.
+	 */
+	const s64 linear_region_size = -(s64)PAGE_OFFSET;
+
+	memblock_remove(round_down(memblock_start_of_DRAM(), SZ_1G) +
+			linear_region_size, ULLONG_MAX);
+
+	enforce_memory_limit();
 
 	/*
 	 * Register the kernel text, kernel data, initrd, and initial
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 4a1c9d0769f2..675757c01eff 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -432,11 +433,34 @@ static void __init bootstrap_linear_mapping(unsigned long va_offset)
 static void __init map_mem(void)
 {
 	struct memblock_region *reg;
+	u64 new_memstart_addr;
+	u64 new_va_offset;
 
-	bootstrap_linear_mapping(KIMAGE_OFFSET);
+	/*
+	 * Select a suitable value for the base of physical memory.
+	 * This should be equal to or below the lowest usable physical
+	 * memory address, and aligned to PUD/PMD size so that we can map
+	 * it efficiently.
+	 */
+	new_memstart_addr = round_down(memblock_start_of_DRAM(), SZ_1G);
+
+	/*
+	 * Calculate the offset between the kernel text mapping that exists
+	 * outside of the linear mapping, and its mapping in the linear region.
+	 */
+	new_va_offset = memstart_addr - new_memstart_addr;
+
+	bootstrap_linear_mapping(new_va_offset);
+
+	kernel_va_offset = new_va_offset;
+
+	/* Recalculate virtual addresses of initrd region */
+	if (initrd_start) {
+		initrd_start += new_va_offset;
+		initrd_end += new_va_offset;
+	}
 
-	kernel_va_offset = KIMAGE_OFFSET;
-	memstart_addr -= KIMAGE_OFFSET;
+	memstart_addr = new_memstart_addr;
 
 	/* map all the memory banks */
 	for_each_memblock(memory, reg) {