From patchwork Fri Aug 19 16:13:12 2016
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 74290
From: Daniel Thompson
To: linux-arm-kernel@lists.infradead.org
Cc: Daniel Thompson, Catalin Marinas, Will Deacon, linux-kernel@vger.kernel.org, patches@linaro.org, linaro-kernel@lists.linaro.org, John Stultz, Sumit Semwal, Marc Zyngier, Dave Martin
Subject: [RFC PATCH v3 4/7] arm64: alternative: Apply alternatives early in boot process
Date: Fri, 19 Aug 2016 17:13:12 +0100
Message-Id: <1471623195-7829-5-git-send-email-daniel.thompson@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1471623195-7829-1-git-send-email-daniel.thompson@linaro.org>
References: <1471623195-7829-1-git-send-email-daniel.thompson@linaro.org>

Currently alternatives are applied very late in the boot process (and
a long time after we enable scheduling). Some alternative sequences,
such as those that alter the way CPU context is stored, must be applied
much earlier in the boot sequence.

Introduce apply_alternatives_early() to allow some alternatives to be
applied immediately after we detect the CPU features of the boot CPU.
Signed-off-by: Daniel Thompson
---
 arch/arm64/include/asm/alternative.h |  1 +
 arch/arm64/kernel/alternative.c      | 36 +++++++++++++++++++++++++++++++++---
 arch/arm64/kernel/smp.c              |  7 +++++++
 3 files changed, 41 insertions(+), 3 deletions(-)

-- 
2.7.4

diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index 8746ff6abd77..2eee073668eb 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -19,6 +19,7 @@ struct alt_instr {
 	u8  alt_len;		/* size of new instruction(s), <= orig_len */
 };
 
+void __init apply_alternatives_early(void);
 void __init apply_alternatives_all(void);
 void apply_alternatives(void *start, size_t length);
 
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index d2ee1b21a10d..9c623b7f69f8 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -27,6 +27,18 @@
 #include
 #include
 
+/*
+ * early-apply features can be detected using only the boot CPU (i.e.
+ * no need to check capability of any secondary CPUs) and, even then,
+ * should only include features where we must patch the kernel very
+ * early in the boot process.
+ *
+ * Note that the cpufeature logic *must* be made aware of early-apply
+ * features to ensure they are reported as enabled without waiting
+ * for other CPUs to boot.
+ */
+#define EARLY_APPLY_FEATURE_MASK	BIT(ARM64_HAS_SYSREG_GIC_CPUIF)
+
 #define __ALT_PTR(a,f)		(u32 *)((void *)&(a)->f + (a)->f)
 #define ALT_ORIG_PTR(a)		__ALT_PTR(a, orig_offset)
 #define ALT_REPL_PTR(a)		__ALT_PTR(a, alt_offset)
@@ -85,7 +97,7 @@ static u32 get_alt_insn(struct alt_instr *alt, u32 *insnptr, u32 *altinsnptr)
 	return insn;
 }
 
-static void __apply_alternatives(void *alt_region)
+static void __apply_alternatives(void *alt_region, unsigned long feature_mask)
 {
 	struct alt_instr *alt;
 	struct alt_region *region = alt_region;
@@ -95,6 +107,9 @@
 		u32 insn;
 		int i, nr_inst;
 
+		if ((BIT(alt->cpufeature) & feature_mask) == 0)
+			continue;
+
 		if (!cpus_have_cap(alt->cpufeature))
 			continue;
 
@@ -117,6 +132,21 @@
 }
 
 /*
+ * This is called very early in the boot process (directly after we run
+ * a feature detect on the boot CPU). No need to worry about other CPUs
+ * here.
+ */
+void apply_alternatives_early(void)
+{
+	struct alt_region region = {
+		.begin	= __alt_instructions,
+		.end	= __alt_instructions_end,
+	};
+
+	__apply_alternatives(&region, EARLY_APPLY_FEATURE_MASK);
+}
+
+/*
  * We might be patching the stop_machine state machine, so implement a
  * really simple polling protocol here.
  */
@@ -135,7 +165,7 @@ static int __apply_alternatives_multi_stop(void *unused)
 		isb();
 	} else {
 		BUG_ON(patched);
-		__apply_alternatives(&region);
+		__apply_alternatives(&region, ~EARLY_APPLY_FEATURE_MASK);
 		/* Barriers provided by the cache flushing */
 		WRITE_ONCE(patched, 1);
 	}
@@ -156,5 +186,5 @@ void apply_alternatives(void *start, size_t length)
 		.end	= start + length,
 	};
 
-	__apply_alternatives(&region);
+	__apply_alternatives(&region, -1);
 }
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 99f607f0fa97..c49e8874fba8 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -441,6 +441,13 @@ void __init smp_prepare_boot_cpu(void)
 	set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
 	cpuinfo_store_boot_cpu();
 	save_boot_cpu_run_el();
+
+	/*
+	 * We now know enough about the boot CPU to apply the
+	 * alternatives that cannot wait until interrupt handling
+	 * and/or scheduling is enabled.
+	 */
+	apply_alternatives_early();
 }
 
 static u64 __init of_get_cpu_mpidr(struct device_node *dn)