From patchwork Mon Apr 18 15:09:47 2016
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 66039
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 will.deacon@arm.com, mark.rutland@arm.com, james.morse@arm.com
Cc: catalin.marinas@arm.com, Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH 7/8] arm64: relocatable: deal with physically misaligned
 kernel images
Date: Mon, 18 Apr 2016 17:09:47 +0200
Message-Id: <1460992188-23295-8-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1460992188-23295-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1460992188-23295-1-git-send-email-ard.biesheuvel@linaro.org>
When booting a relocatable kernel image, there is no practical reason
to refuse an image whose load address is not exactly TEXT_OFFSET bytes
above a 2 MB aligned base address, as long as the physical and virtual
misalignment with respect to the swapper block size are equal, and are
both aligned to THREAD_SIZE.

Since the virtual misalignment is under our control when we first
enter the kernel proper, we can simply choose its value to be equal to
the physical misalignment.

So treat the misalignment of the physical load address as the initial
KASLR offset, and fix up the remaining code to deal with that.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/head.S  | 9 ++++++---
 arch/arm64/kernel/kaslr.c | 6 +++---
 2 files changed, 9 insertions(+), 6 deletions(-)

-- 
2.5.0

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index c5e5edca6897..00a32101ab51 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -25,6 +25,7 @@
 #include <linux/irqchip/arm-gic-v3.h>
 
 #include <asm/assembler.h>
+#include <asm/boot.h>
 #include <asm/asm-offsets.h>
 #include <asm/cache.h>
 #include <asm/cputype.h>
@@ -213,8 +214,8 @@ efi_header_end:
 ENTRY(stext)
 	bl	preserve_boot_args
 	bl	el2_setup			// Drop to EL1, w20=cpu_boot_mode
-	mov	x23, xzr			// KASLR offset, defaults to 0
 	adrp	x24, __PHYS_OFFSET
+	and	x23, x24, MIN_KIMG_ALIGN - 1	// KASLR offset, defaults to 0
 	bl	set_cpu_boot_mode_flag
 	bl	__create_page_tables		// x25=TTBR0, x26=TTBR1
 	/*
@@ -449,11 +450,13 @@ __primary_switched:
 	bl	kasan_early_init
 #endif
 #ifdef CONFIG_RANDOMIZE_BASE
-	cbnz	x23, 0f				// already running randomized?
+	tst	x23, ~(MIN_KIMG_ALIGN - 1)	// already running randomized?
+	b.ne	0f
 	mov	x0, x21				// pass FDT address in x0
+	mov	x1, x23				// pass modulo offset in x1
 	bl	kaslr_early_init		// parse FDT for KASLR options
 	cbz	x0, 0f				// KASLR disabled? just proceed
-	mov	x23, x0				// record KASLR offset
+	orr	x23, x23, x0			// record KASLR offset
 	ret	x28				// we must enable KASLR, return
 						// to __enable_mmu()
 0:
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 582983920054..b05469173ba5 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -74,7 +74,7 @@ extern void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size,
  * containing function pointers) to be reinitialized, and zero-initialized
  * .bss variables will be reset to 0.
  */
-u64 __init kaslr_early_init(u64 dt_phys)
+u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
 {
 	void *fdt;
 	u64 seed, offset, mask, module_range;
@@ -132,8 +132,8 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	 * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
 	 * happens, increase the KASLR offset by the size of the kernel image.
 	 */
-	if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
-	    (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT))
+	if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) !=
+	    (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT))
 		offset = (offset + (u64)(_end - _text)) & mask;
 
 	if (IS_ENABLED(CONFIG_KASAN))
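
P.S.: for reviewers, the interplay between the physical misalignment
and the randomization offset may be easier to follow in plain C. The
sketch below is an editorial illustration, not part of the patch: the
constants assume a 4 KiB granule configuration, and the load address,
link address and image size are made-up example values.

/* user-space model of the offset arithmetic; build with: cc -std=c99 */
#include <stdint.h>
#include <stdio.h>

#define MIN_KIMG_ALIGN      (1UL << 21) /* 2 MiB swapper block alignment */
#define SWAPPER_TABLE_SHIFT 30          /* 4 KiB granule: 1 GiB per table entry */

int main(void)
{
	/* hypothetical load address: 0x194000 above a 2 MiB base, not TEXT_OFFSET */
	uint64_t phys_load = 0x40194000;
	/* hypothetical link-time _text address and image size */
	uint64_t text_va = 0xffff000008080000, image_size = 0x1200000;

	/* head.S: and x23, x24, MIN_KIMG_ALIGN - 1 */
	uint64_t modulo_offset = phys_load & (MIN_KIMG_ALIGN - 1);

	/* stand-in for the 2 MiB aligned random offset from kaslr_early_init() */
	uint64_t offset = 0x5aUL << 21;

	/*
	 * kaslr.c: only with the modulo offset added in can we tell whether
	 * the image will cross a swapper table boundary; if it would, slide
	 * it up by the image size (the 'mask' step is omitted for brevity).
	 */
	if (((text_va + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) !=
	    ((text_va + image_size + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT))
		offset += image_size;

	/*
	 * head.S: orr x23, x23, x0 -- OR equals addition here, because the
	 * random offset is 2 MiB aligned while modulo_offset < 2 MiB, so
	 * the two values occupy disjoint bits of x23.
	 */
	uint64_t total = modulo_offset | offset;
	printf("modulo offset %#jx, total offset %#jx\n",
	       (uintmax_t)modulo_offset, (uintmax_t)total);
	return 0;
}

(The orr in __primary_switched relies on exactly that disjointness:
kaslr_early_init() masks its return value down to a multiple of 2 MiB,
leaving the low 21 bits free to carry the physical misalignment.)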