From patchwork Thu Jan 28 08:54:09 2016
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 60691
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: stable@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Catalin Marinas
Subject: [PATCH] arm64: mm: use correct mapping granularity under DEBUG_RODATA
Date: Thu, 28 Jan 2016 09:54:09 +0100
Message-Id: <1453971249-4407-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.5.0
X-Mailing-List: stable@vger.kernel.org

(cherry picked from commit 4fee9f364b9b99f76732f2a6fd6df679a237fa74)

When booting a 64k pages kernel that is built with CONFIG_DEBUG_RODATA
and resides at an offset that is not a multiple of 512 MB, the rounding
that occurs in __map_memblock() and fixup_executable() results in
incorrect regions being mapped.
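To make the rounding concrete: with 64k pages, SECTION_SIZE is 512 MB,
so rounding the kernel's physical start down to SECTION_SIZE reaches
well below the kernel image. A minimal stand-alone C sketch (not part
of the patch; round_down() is reimplemented here for illustration, and
the addresses are the ones from the example below):

	#include <stdio.h>

	/* illustration-only reimplementation of the kernel's helper */
	#define round_down(x, a)	((x) & ~((unsigned long)(a) - 1))

	int main(void)
	{
		unsigned long section_size = 512UL << 20; /* 64k pages */
		unsigned long pa_stext = 0x40200000UL;    /* DRAM base + 2 MB */

		/* pre-patch: the executable mapping starts 2 MB below _stext */
		printf("kernel_x_start = 0x%lx\n",
		       round_down(pa_stext, section_size)); /* 0x40000000 */
		return 0;
	}

Those extra 2 MB below _stext are what end up mapped at the tail of the
module region in the dump below.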
The following snippet from /sys/kernel/debug/kernel_page_tables shows
how, when the kernel is loaded 2 MB above the base of DRAM at
0x40000000, the first 2 MB of memory (which may be inaccessible from
non-secure EL1 or just reserved by the firmware) is inadvertently
mapped into the end of the module region.

  ---[ Modules start ]---
  0xfffffdffffe00000-0xfffffe0000000000     2M RW NX ... UXN MEM/NORMAL
  ---[ Modules end ]---
  ---[ Kernel Mapping ]---
  0xfffffe0000000000-0xfffffe0000090000   576K RW NX ... UXN MEM/NORMAL
  0xfffffe0000090000-0xfffffe0000200000  1472K ro x  ... UXN MEM/NORMAL
  0xfffffe0000200000-0xfffffe0000800000     6M ro x  ... UXN MEM/NORMAL
  0xfffffe0000800000-0xfffffe0000810000    64K ro x  ... UXN MEM/NORMAL
  0xfffffe0000810000-0xfffffe0000a00000  1984K RW NX ... UXN MEM/NORMAL
  0xfffffe0000a00000-0xfffffe00ffe00000  4084M RW NX ... UXN MEM/NORMAL

The same issue is likely to occur on 16k pages kernels whose load
address is not a multiple of 32 MB (i.e., SECTION_SIZE). So round to
SWAPPER_BLOCK_SIZE instead of SECTION_SIZE.

Fixes: da141706aea5 ("arm64: add better page protections to arm64")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland
Acked-by: Laura Abbott
Cc: <stable@vger.kernel.org> # 4.0+
Signed-off-by: Catalin Marinas
[ard.biesheuvel: add #define of SWAPPER_BLOCK_SIZE for -stable version]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/mmu.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

-- 
2.5.0

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5b8b664422d3..3b85ced2668a 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -300,6 +300,7 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 }
 
 #ifdef CONFIG_DEBUG_RODATA
+#define SWAPPER_BLOCK_SIZE (PAGE_SHIFT == 12 ? SECTION_SIZE : PAGE_SIZE)
 static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
 {
 	/*
@@ -307,8 +308,8 @@ static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
 	 * for now. This will get more fine grained later once all memory
 	 * is mapped
 	 */
-	unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
-	unsigned long kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
+	unsigned long kernel_x_start = round_down(__pa(_stext), SWAPPER_BLOCK_SIZE);
+	unsigned long kernel_x_end = round_up(__pa(__init_end), SWAPPER_BLOCK_SIZE);
 
 	if (end < kernel_x_start) {
 		create_mapping(start, __phys_to_virt(start),
@@ -396,18 +397,18 @@ void __init fixup_executable(void)
 {
 #ifdef CONFIG_DEBUG_RODATA
 	/* now that we are actually fully mapped, make the start/end more fine grained */
-	if (!IS_ALIGNED((unsigned long)_stext, SECTION_SIZE)) {
+	if (!IS_ALIGNED((unsigned long)_stext, SWAPPER_BLOCK_SIZE)) {
 		unsigned long aligned_start = round_down(__pa(_stext),
-							 SECTION_SIZE);
+							 SWAPPER_BLOCK_SIZE);
 
 		create_mapping(aligned_start, __phys_to_virt(aligned_start),
 				__pa(_stext) - aligned_start,
 				PAGE_KERNEL);
 	}
 
-	if (!IS_ALIGNED((unsigned long)__init_end, SECTION_SIZE)) {
+	if (!IS_ALIGNED((unsigned long)__init_end, SWAPPER_BLOCK_SIZE)) {
 		unsigned long aligned_end = round_up(__pa(__init_end),
-						      SECTION_SIZE);
+						      SWAPPER_BLOCK_SIZE);
 		create_mapping(__pa(__init_end), (unsigned long)__init_end,
 				aligned_end - __pa(__init_end),
 				PAGE_KERNEL);
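For reference, the granularity the new SWAPPER_BLOCK_SIZE definition
selects per granule: 2 MB sections on 4k pages, individual pages on
16k/64k pages, where the swapper mappings are built at page
granularity. A minimal stand-alone C sketch of that arithmetic (not
kernel code; it assumes the usual arm64 relation SECTION_SIZE =
(PAGE_SIZE / 8) * PAGE_SIZE, i.e. one next-level table per section):

	#include <stdio.h>

	int main(void)
	{
		const int shifts[] = { 12, 14, 16 }; /* 4k, 16k, 64k granules */

		for (int i = 0; i < 3; i++) {
			unsigned long page_size = 1UL << shifts[i];
			/* a table has PAGE_SIZE / 8 entries, each mapping one page */
			unsigned long section_size = (page_size / 8) * page_size;
			/* the patch: sections on 4k pages, pages otherwise */
			unsigned long block = (shifts[i] == 12) ? section_size
								: page_size;

			printf("PAGE_SHIFT=%d SECTION_SIZE=%luM SWAPPER_BLOCK_SIZE=%luK\n",
			       shifts[i], section_size >> 20, block >> 10);
		}
		return 0;
	}

This prints 2M/2048K for 4k pages, 32M/16K for 16k pages and 512M/64K
for 64k pages, matching the 32 MB and 512 MB alignments discussed in
the commit message.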