From patchwork Wed Apr 15 15:34:22 2015
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 47209
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: mark.rutland@arm.com, catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 11/13] arm64: map linear region as non-executable
Date: Wed, 15 Apr 2015 17:34:22 +0200
Message-Id: <1429112064-19952-12-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Now that we have moved the kernel text out of the linear region, there is
no longer a reason to map the linear region as executable. This also
allows us to completely get rid of the __map_memblock() variant that
only maps some of it executable if CONFIG_DEBUG_RODATA is selected.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/mmu.c | 41 ++---------------------------------------
 1 file changed, 2 insertions(+), 39 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b457b7e425cc..c07ba8bdd8ed 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -303,47 +303,10 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 			       phys, virt, size, prot, late_alloc);
 }
 
-#ifdef CONFIG_DEBUG_RODATA
 static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
 {
-	/*
-	 * Set up the executable regions using the existing section mappings
-	 * for now. This will get more fine grained later once all memory
-	 * is mapped
-	 */
-	unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
-	unsigned long kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
-
-	if (end < kernel_x_start) {
-		create_mapping(start, __phys_to_virt(start),
-			       end - start, PAGE_KERNEL);
-	} else if (start >= kernel_x_end) {
-		create_mapping(start, __phys_to_virt(start),
-			       end - start, PAGE_KERNEL);
-	} else {
-		if (start < kernel_x_start)
-			create_mapping(start, __phys_to_virt(start),
-				       kernel_x_start - start,
-				       PAGE_KERNEL);
-		create_mapping(kernel_x_start,
-			       __phys_to_virt(kernel_x_start),
-			       kernel_x_end - kernel_x_start,
-			       PAGE_KERNEL_EXEC);
-		if (kernel_x_end < end)
-			create_mapping(kernel_x_end,
-				       __phys_to_virt(kernel_x_end),
-				       end - kernel_x_end,
-				       PAGE_KERNEL);
-	}
-
-}
-#else
-static void __init __map_memblock(phys_addr_t start, phys_addr_t end)
-{
-	create_mapping(start, __phys_to_virt(start), end - start,
-		       PAGE_KERNEL_EXEC);
+	create_mapping(start, __phys_to_virt(start), end - start, PAGE_KERNEL);
 }
-#endif
 
 struct bootstrap_pgtables {
 	pte_t pte[PTRS_PER_PTE];
@@ -429,7 +392,7 @@ static void __init bootstrap_linear_mapping(unsigned long va_offset)
 #endif
 
 	create_mapping(__pa(vstart - va_offset), vstart, vend - vstart,
-		       PAGE_KERNEL_EXEC);
+		       PAGE_KERNEL);
 
 	/*
 	 * Temporarily limit the memblock range. We need to do this as