From patchwork Tue Feb 2 04:06:47 2016
X-Patchwork-Submitter: Hanjun Guo
X-Patchwork-Id: 60992
From: Hanjun Guo
Subject: [PATCH 3/3] arm64: mm: ensure patched kernel text is fetched from PoU
Date: Tue, 2 Feb 2016 12:06:47 +0800
Message-ID: <1454386007-11860-4-git-send-email-guohanjun@huawei.com>
In-Reply-To: <1454386007-11860-1-git-send-email-guohanjun@huawei.com>
References: <1454386007-11860-1-git-send-email-guohanjun@huawei.com>
X-Mailing-List: stable@vger.kernel.org

From: Will Deacon

The arm64 booting document requires that the bootloader has cleaned the
kernel image to the PoC. However, when a CPU re-enters the kernel due to
either a CPU hotplug "on" event or resuming from a low-power state
(e.g. cpuidle), the kernel text may in fact be dirty at the PoU due to
things like alternative patching or even module loading.

Thanks to I-cache speculation with the MMU off, stale instructions could
be fetched prior to enabling the MMU, potentially leading to crashes
when executing regions of code that have been modified at runtime.

This patch addresses the issue by ensuring that the local I-cache is
invalidated immediately after a CPU has enabled its MMU but before
jumping out of the identity mapping. Any stale instructions fetched from
the PoC will then be discarded and refetched correctly from the PoU.
Patching kernel text executed prior to the MMU being enabled is
prohibited, so the early entry code will always be clean.
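For reference, the maintenance sequence being added is annotated below.
This is a minimal sketch: the instructions are exactly those in the
patch, while the comments are explanatory (semantics per the ARM
Architecture Reference Manual) rather than part of the patch itself:

	/*
	 * Runs on the local CPU immediately after SCTLR_EL1.M has been
	 * set, while still executing from the identity mapping.
	 */
	ic	iallu		// invalidate all of this CPU's I-cache
				// to the PoU (not broadcast to other CPUs)
	dsb	nsh		// wait for the invalidation to complete;
				// nsh suffices as the effect is CPU-local
	isb			// flush the pipeline so later instruction
				// fetches see the invalidated I-cache

Because each CPU executes this path itself on its way back into the
kernel, the CPU-local ic iallu appears sufficient; the broadcast
ic ialluis variant (with the costlier dsb ish) is not needed here.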
Reviewed-by: Mark Rutland
Tested-by: Mark Rutland
Signed-off-by: Will Deacon
Signed-off-by: Hanjun Guo
---
 arch/arm64/kernel/head.S  | 8 ++++++++
 arch/arm64/kernel/sleep.S | 8 ++++++++
 arch/arm64/mm/proc.S      | 1 -
 3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 36aa31f..af6e4e8 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -682,5 +682,13 @@ __enable_mmu:
 	isb
 	msr	sctlr_el1, x0
 	isb
+	/*
+	 * Invalidate the local I-cache so that any instructions fetched
+	 * speculatively from the PoC are discarded, since they may have
+	 * been dynamically patched at the PoU.
+	 */
+	ic	iallu
+	dsb	nsh
+	isb
 	br	x27
 ENDPROC(__enable_mmu)
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index ede186c..1c6969b 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -134,6 +134,14 @@ ENTRY(cpu_resume_mmu)
 	ldr	x3, =cpu_resume_after_mmu
 	msr	sctlr_el1, x0		// restore sctlr_el1
 	isb
+	/*
+	 * Invalidate the local I-cache so that any instructions fetched
+	 * speculatively from the PoC are discarded, since they may have
+	 * been dynamically patched at the PoU.
+	 */
+	ic	iallu
+	dsb	nsh
+	isb
 	br	x3			// global jump to virtual address
 ENDPROC(cpu_resume_mmu)
 cpu_resume_after_mmu:
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index cdd754e..ee18bbc 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -196,7 +196,6 @@ ENDPROC(cpu_do_switch_mm)
  *	value of the SCTLR_EL1 register.
  */
 ENTRY(__cpu_setup)
-	ic	iallu				// I+BTB cache invalidate
 	tlbi	vmalle1is			// invalidate I + D TLBs
 	dsb	ish

--
1.9.1
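As background on why runtime-patched text is clean only to the PoU in
the first place: the patching ("producer") side on ARMv8 typically uses
the architecture's standard code-modification sequence, sketched below
with illustrative registers (x0 = address of the patched instruction,
w1 = the new encoding; this is the generic sequence, not a quote from
the kernel sources). Note that nothing in it pushes the new instruction
past the PoU towards the PoC:

	str	w1, [x0]	// store the new instruction
	dc	cvau, x0	// clean the D-cache line to the PoU
	dsb	ish		// order the clean before the invalidate
	ic	ivau, x0	// invalidate the I-cache line to the PoU
	dsb	ish		// wait for the invalidation to complete
	isb			// resynchronize the local fetch stream

A CPU that later comes back online and speculatively fetches with the
MMU still off can therefore pick up the stale pre-patch copy from the
PoC; that is exactly the window the ic iallu added above closes.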