From patchwork Fri Apr 10 13:53:54 2015
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 47051
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: mark.rutland@arm.com, catalin.marinas@arm.com,
	linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v3 10/11] arm64: allow kernel Image to be loaded anywhere in physical memory
Date: Fri, 10 Apr 2015 15:53:54 +0200
Message-Id: <1428674035-26603-11-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1428674035-26603-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1428674035-26603-1-git-send-email-ard.biesheuvel@linaro.org>

This relaxes the kernel Image placement requirements, so that it may
be placed at any 2 MB aligned offset in physical memory. This is
accomplished by ignoring PHYS_OFFSET when installing memblocks, and
accounting for the apparent virtual offset of the kernel Image (in
addition to the 64 MB that it is moved below PAGE_OFFSET).
As a result, virtual address references below PAGE_OFFSET are correctly
mapped onto physical references into the kernel Image regardless of
where it sits in memory.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 Documentation/arm64/booting.txt | 17 +++++++----------
 arch/arm64/mm/init.c            | 32 +++++++++++++++++++-------------
 2 files changed, 26 insertions(+), 23 deletions(-)

diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 6396460f6085..811d93548bdc 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -110,16 +110,13 @@ Header notes:
   depending on selected features, and is effectively unbound.
 
 The Image must be placed text_offset bytes from a 2MB aligned base
-address near the start of usable system RAM and called there. Memory
-below that base address is currently unusable by Linux, and therefore it
-is strongly recommended that this location is the start of system RAM.
-At least image_size bytes from the start of the image must be free for
-use by the kernel.
-
-Any memory described to the kernel (even that below the 2MB aligned base
-address) which is not marked as reserved from the kernel e.g. with a
-memreserve region in the device tree) will be considered as available to
-the kernel.
+address anywhere in usable system RAM and called there. At least
+image_size bytes from the start of the image must be free for use
+by the kernel.
+
+Any memory described to the kernel which is not marked as reserved from
+the kernel e.g. with a memreserve region in the device tree) will be
+considered as available to the kernel.
 
 Before jumping into the kernel, the following conditions must be met:
 
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 48175b769074..18234c7cf6e6 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -375,8 +375,6 @@ __setup("keepinitrd", keepinitrd_setup);
 
 void __init early_init_dt_add_memory_arch(u64 base, u64 size)
 {
-	const u64 phys_offset = __pa(PAGE_OFFSET);
-
 	if (!PAGE_ALIGNED(base)) {
 		if (size < PAGE_SIZE - (base & ~PAGE_MASK)) {
 			pr_warn("Ignoring memory block 0x%llx - 0x%llx\n",
@@ -388,16 +386,24 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
 	}
 	size &= PAGE_MASK;
 
-	if (base + size < phys_offset) {
-		pr_warning("Ignoring memory block 0x%llx - 0x%llx\n",
-			   base, base + size);
-		return;
-	}
-	if (base < phys_offset) {
-		pr_warning("Ignoring memory range 0x%llx - 0x%llx\n",
-			   base, phys_offset);
-		size -= phys_offset - base;
-		base = phys_offset;
-	}
 	memblock_add(base, size);
+
+	/*
+	 * Set memstart_addr to the base of the lowest physical memory region,
+	 * rounded down to PUD/PMD alignment so we can map it efficiently.
+	 * Since this also affects the apparent offset of the kernel image in
+	 * the virtual address space, increase image_offset by the same amount
+	 * that we decrease memstart_addr.
+	 */
+	if (!memstart_addr || memstart_addr > base) {
+		u64 new_memstart_addr;
+
+		if (IS_ENABLED(CONFIG_ARM64_64K_PAGES))
+			new_memstart_addr = base & PMD_MASK;
+		else
+			new_memstart_addr = base & PUD_MASK;
+
+		image_offset += memstart_addr - new_memstart_addr;
+		memstart_addr = new_memstart_addr;
+	}
 }
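
For a boot loader, the relaxed booting.txt rule reduces to one alignment
check. Below is a minimal loader-side sketch of that check; the helper
name image_placement_ok and its parameters are illustrative, not part of
the patch, and text_offset is the value read from the Image header.

	#include <stdbool.h>
	#include <stdint.h>

	#define SZ_2M	UINT64_C(0x200000)

	/*
	 * Hypothetical loader-side check: under the relaxed rules, a load
	 * address is acceptable iff it sits text_offset bytes above some
	 * 2 MB aligned base, i.e. (load_addr - text_offset) is 2 MB aligned.
	 */
	static bool image_placement_ok(uint64_t load_addr, uint64_t text_offset)
	{
		return load_addr >= text_offset &&
		       ((load_addr - text_offset) & (SZ_2M - 1)) == 0;
	}

For example, with the usual arm64 text_offset of 0x80000, a load address
of 0x80080000 passes the check while 0x80090000 does not.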
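The memstart_addr/image_offset bookkeeping in
early_init_dt_add_memory_arch() can also be watched in isolation with a
small user-space model. This is only a sketch under assumptions: it uses
the 4k-page configuration (PUD spans 1 GB), made-up bank addresses, and
an initial memstart_addr that earlier patches in this series are assumed
to have set to the PUD-aligned base of the kernel Image; memblock_add()
and the 64k-page PMD_MASK case are omitted.

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	#define PUD_SHIFT	30	/* 1 GB regions with 4k pages */
	#define PUD_MASK	(~((UINT64_C(1) << PUD_SHIFT) - 1))

	/*
	 * Assumed initial state (not shown in this hunk): the Image was
	 * loaded near 3 GB and memstart_addr already points at its
	 * PUD-aligned base, with image_offset starting at zero.
	 */
	static uint64_t memstart_addr = 0xc0000000;
	static uint64_t image_offset;

	static void model_add_memory(uint64_t base)
	{
		if (!memstart_addr || memstart_addr > base) {
			uint64_t new_memstart_addr = base & PUD_MASK;

			/*
			 * Lower memstart_addr to the new bank's aligned base
			 * and grow image_offset by the same amount, exactly
			 * as the patch does.
			 */
			image_offset += memstart_addr - new_memstart_addr;
			memstart_addr = new_memstart_addr;
		}
		printf("bank at %#010" PRIx64 ": memstart_addr=%#010" PRIx64
		       " image_offset=%#010" PRIx64 "\n",
		       base, memstart_addr, image_offset);
	}

	int main(void)
	{
		model_add_memory(0xc0200000);	/* not lower: no change      */
		model_add_memory(0x80000000);	/* lower bank: drop by 1 GB  */
		model_add_memory(0xe0000000);	/* higher bank: no change    */
		return 0;
	}

Running this shows memstart_addr dropping from 0xc0000000 to 0x80000000
when the lower bank appears, while image_offset grows by the same
0x40000000, which is how the apparent virtual offset of the Image stays
consistent no matter which bank is discovered first.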