From patchwork Mon Apr 18 15:09:48 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 66040
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	will.deacon@arm.com, mark.rutland@arm.com, james.morse@arm.com
Cc: catalin.marinas@arm.com, Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH 8/8] arm64: kaslr: increase randomization granularity
Date: Mon, 18 Apr 2016 17:09:48 +0200
Message-Id: <1460992188-23295-9-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1460992188-23295-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1460992188-23295-1-git-send-email-ard.biesheuvel@linaro.org>
Currently, our KASLR implementation randomizes the placement of the core
kernel at 2 MB granularity. This is based on the arm64 kernel boot protocol,
which mandates that the kernel is loaded TEXT_OFFSET bytes above a 2 MB
aligned base address. This requirement is a consequence of the fact that the
block size used by the early mapping code may be at most 2 MB (for a 4 KB
granule kernel).

But we can do better than that: since a KASLR kernel needs to be relocated
in any case, we can tolerate a physical misalignment as long as the virtual
misalignment relative to this 2 MB block size is equal in size, and code to
deal with this is already in place.

Since we align the kernel segments to 64 KB, let's randomize the physical
offset at 64 KB granularity as well (unless CONFIG_DEBUG_ALIGN_RODATA is
enabled). This way, the page table and TLB footprint is not affected. The
higher granularity allows for 5 bits of additional entropy to be used.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 drivers/firmware/efi/libstub/arm64-stub.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

-- 
2.5.0

diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index a90f6459f5c6..eae693eb3e91 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -81,15 +81,24 @@ efi_status_t handle_kernel_image(efi_system_table_t *sys_table_arg,
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && phys_seed != 0) {
 		/*
+		 * If CONFIG_DEBUG_ALIGN_RODATA is not set, produce a
+		 * displacement in the interval [0, MIN_KIMG_ALIGN) that
+		 * is a multiple of the minimal segment alignment (SZ_64K)
+		 */
+		u32 mask = (MIN_KIMG_ALIGN - 1) & ~(SZ_64K - 1);
+		u32 offset = !IS_ENABLED(CONFIG_DEBUG_ALIGN_RODATA) ?
+			     (phys_seed >> 32) & mask : TEXT_OFFSET;
+
+		/*
 		 * If KASLR is enabled, and we have some randomness available,
 		 * locate the kernel at a randomized offset in physical memory.
 		 */
-		*reserve_size = kernel_memsize + TEXT_OFFSET;
+		*reserve_size = kernel_memsize + offset;
 		status = efi_random_alloc(sys_table_arg, *reserve_size,
 					  MIN_KIMG_ALIGN, reserve_addr,
-					  phys_seed);
+					  (u32)phys_seed);
 
-		*image_addr = *reserve_addr + TEXT_OFFSET;
+		*image_addr = *reserve_addr + offset;
 	} else {
 		/*
 		 * Else, try a straight allocation at the preferred offset.
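
[Editor's illustration, not part of the patch] A minimal standalone sketch of
how the displacement above is derived from the EFI random seed. The values of
MIN_KIMG_ALIGN (2 MB) and TEXT_OFFSET (0x80000) are assumed here purely for
illustration; the kernel defines them in its own headers.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative values only; assumptions, not taken from the patch. */
#define SZ_64K          0x10000u
#define MIN_KIMG_ALIGN  0x200000u   /* 2 MB: largest early-mapping block size */
#define TEXT_OFFSET     0x80000u    /* assumed default arm64 image offset */

/*
 * Mirror of the patch's expression: derive a displacement in the interval
 * [0, MIN_KIMG_ALIGN) that is a multiple of SZ_64K from the upper 32 bits
 * of the 64-bit EFI random seed, unless rodata alignment debugging forces
 * the fixed TEXT_OFFSET.
 */
static uint32_t kimg_displacement(uint64_t phys_seed, int debug_align_rodata)
{
	uint32_t mask = (MIN_KIMG_ALIGN - 1) & ~(SZ_64K - 1);   /* 0x1f0000 */

	return debug_align_rodata ? TEXT_OFFSET
				  : (uint32_t)(phys_seed >> 32) & mask;
}

int main(void)
{
	/*
	 * The mask covers bits 16..20, i.e. 32 possible 64 KB slots within a
	 * 2 MB block: the 5 bits of additional entropy mentioned above.
	 */
	printf("mask   = %#x\n", (unsigned)((MIN_KIMG_ALIGN - 1) & ~(SZ_64K - 1)));
	printf("offset = %#x\n",
	       (unsigned)kimg_displacement(0xdeadbeef00000000ULL, 0));
	return 0;
}
```

Note how the diff splits the seed: the lower 32 bits are passed to
efi_random_alloc() to pick the 2 MB-aligned allocation, while the upper 32
bits select the 64 KB-granular displacement within it, so the two
randomization decisions stay independent.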