From patchwork Thu Oct 24 12:47:55 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 177433
From: Ard Biesheuvel
To: stable@vger.kernel.org
Cc: Ard Biesheuvel, Will Deacon, Catalin Marinas, Marc Zyngier,
 Mark Rutland, Suzuki K Poulose, Jeremy Linton, Andre Przywara,
 Alexandru Elisei, Dave Martin, James Morse, Robin Murphy,
 Julien Thierry
Subject: [PATCH for-stable-4.14 10/48] arm64: capabilities: Update prototype for enable call back
Date: Thu, 24 Oct 2019 14:47:55 +0200
Message-Id: <20191024124833.4158-11-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20191024124833.4158-1-ard.biesheuvel@linaro.org>
References: <20191024124833.4158-1-ard.biesheuvel@linaro.org>

From: Dave Martin

[ Upstream commit c0cda3b8ee6b4b6851b2fd8b6db91fd7b0e2524a ]

We issue the enable() call back for all CPU hwcaps capabilities
available on the system, on all the CPUs. So far we have ignored the
argument passed to the call back, which had a prototype accepting a
"void *" for use with on_each_cpu() and later with stop_machine().
However, with commit 0a0d111d40fd1 ("arm64: cpufeature: Pass capability
structure to ->enable callback"), there are some users of the argument
who want the matching capability struct pointer when there are multiple
matching criteria for a single capability. Clean up the declaration of
the call back to make this clear:

1) Rename it to cpu_enable(), to imply that it takes the necessary
   actions on the CPU it is called on for the given entry.
2) Pass a const pointer to the capability, to allow the call back to
   inspect the entry (e.g., to check whether any action is needed on
   this CPU).
3) We don't care about the result of the call back, so turn the return
   type into void.

Cc: Will Deacon
Cc: Catalin Marinas
Cc: Mark Rutland
Cc: Andre Przywara
Cc: James Morse
Acked-by: Robin Murphy
Reviewed-by: Julien Thierry
Signed-off-by: Dave Martin
[suzuki: convert more users, rename call back and drop results]
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/cpufeature.h |  7 ++-
 arch/arm64/include/asm/processor.h  |  5 +-
 arch/arm64/kernel/cpu_errata.c      | 55 +++++++++-----------
 arch/arm64/kernel/cpufeature.c      | 34 +++++++-----
 arch/arm64/kernel/fpsimd.c          |  1 +
 arch/arm64/kernel/traps.c           |  4 +-
 arch/arm64/mm/fault.c               |  3 +-
 7 files changed, 60 insertions(+), 49 deletions(-)

-- 
2.20.1

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 5048c7a55eef..bff4d95db039 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -96,7 +96,12 @@ struct arm64_cpu_capabilities {
 	u16 capability;
 	int def_scope;			/* default scope */
 	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
-	int (*enable)(void *);		/* Called on all active CPUs */
+	/*
+	 * Take the appropriate actions to enable this capability for this CPU.
+	 * For each successfully booted CPU, this method is called for each
+	 * globally detected capability.
+	 */
+	void (*cpu_enable)(const struct arm64_cpu_capabilities *cap);
 	union {
 		struct {	/* To be used for erratum handling only */
 			u32 midr_model;
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 91bb97d8bdbf..9b6ac522a71a 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -37,6 +37,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -222,8 +223,8 @@ static inline void spin_lock_prefetch(const void *ptr)
 
 #endif
 
-int cpu_enable_pan(void *__unused);
-int cpu_enable_cache_maint_trap(void *__unused);
+void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused);
+void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused);
 
 #endif /* __ASSEMBLY__ */
 #endif /* __ASM_PROCESSOR_H */
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 3d6d7fae45de..3c2a68d766a2 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -61,11 +61,11 @@ has_mismatched_cache_type(const struct arm64_cpu_capabilities *entry,
 		(arm64_ftr_reg_ctrel0.sys_val & mask);
 }
 
-static int cpu_enable_trap_ctr_access(void *__unused)
+static void
+cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
 {
 	/* Clear SCTLR_EL1.UCT */
 	config_sctlr_el1(SCTLR_EL1_UCT, 0);
-	return 0;
 }
 
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
@@ -169,25 +169,25 @@ static void call_hvc_arch_workaround_1(void)
 	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
 }
 
-static int enable_smccc_arch_workaround_1(void *data)
+static void
+enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 {
-	const struct arm64_cpu_capabilities *entry = data;
 	bp_hardening_cb_t cb;
 	void *smccc_start, *smccc_end;
 	struct arm_smccc_res res;
 
 	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return 0;
+		return;
 
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
-		return 0;
+		return;
 
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return 0;
+			return;
 		cb = call_hvc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_hvc_start;
 		smccc_end = __smccc_workaround_1_hvc_end;
@@ -197,19 +197,19 @@ static int enable_smccc_arch_workaround_1(void *data)
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return 0;
+			return;
 		cb = call_smc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_smc_start;
 		smccc_end = __smccc_workaround_1_smc_end;
 		break;
 
 	default:
-		return 0;
+		return;
 	}
 
 	install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
 
-	return 0;
+	return;
 }
 
 static void qcom_link_stack_sanitization(void)
@@ -224,15 +224,12 @@ static void qcom_link_stack_sanitization(void)
 		     : "=&r" (tmp));
 }
 
-static int qcom_enable_link_stack_sanitization(void *data)
+static void
+qcom_enable_link_stack_sanitization(const struct arm64_cpu_capabilities *entry)
 {
-	const struct arm64_cpu_capabilities *entry = data;
-
 	install_bp_hardening_cb(entry, qcom_link_stack_sanitization,
 				__qcom_hyp_sanitize_link_stack_start,
 				__qcom_hyp_sanitize_link_stack_end);
-
-	return 0;
 }
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
@@ -431,7 +428,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.desc = "ARM errata 826319, 827319, 824069",
 		.capability = ARM64_WORKAROUND_CLEAN_CACHE,
 		MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x02),
-		.enable = cpu_enable_cache_maint_trap,
+		.cpu_enable = cpu_enable_cache_maint_trap,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_819472
@@ -440,7 +437,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.desc = "ARM errata 819472",
 		.capability = ARM64_WORKAROUND_CLEAN_CACHE,
 		MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x01),
-		.enable = cpu_enable_cache_maint_trap,
+		.cpu_enable = cpu_enable_cache_maint_trap,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_832075
@@ -521,14 +518,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.capability = ARM64_MISMATCHED_CACHE_LINE_SIZE,
 		.matches = has_mismatched_cache_type,
 		.def_scope = SCOPE_LOCAL_CPU,
-		.enable = cpu_enable_trap_ctr_access,
+		.cpu_enable = cpu_enable_trap_ctr_access,
 	},
 	{
 		.desc = "Mismatched cache type",
 		.capability = ARM64_MISMATCHED_CACHE_TYPE,
 		.matches = has_mismatched_cache_type,
 		.def_scope = SCOPE_LOCAL_CPU,
-		.enable = cpu_enable_trap_ctr_access,
+		.cpu_enable = cpu_enable_trap_ctr_access,
 	},
 #ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
 	{
@@ -567,27 +564,27 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
-		.enable = qcom_enable_link_stack_sanitization,
+		.cpu_enable = qcom_enable_link_stack_sanitization,
 	},
 	{
 		.capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
@@ -596,7 +593,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
-		.enable = qcom_enable_link_stack_sanitization,
+		.cpu_enable = qcom_enable_link_stack_sanitization,
 	},
 	{
 		.capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
@@ -605,12 +602,12 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 #endif
 #ifdef CONFIG_ARM64_SSBD
@@ -636,8 +633,8 @@ void verify_local_cpu_errata_workarounds(void)
 
 	for (; caps->matches; caps++) {
 		if (cpus_have_cap(caps->capability)) {
-			if (caps->enable)
-				caps->enable((void *)caps);
+			if (caps->cpu_enable)
+				caps->cpu_enable(caps);
 		} else if (caps->matches(caps, SCOPE_LOCAL_CPU)) {
 			pr_crit("CPU%d: Requires work around for %s, not detected"
 				" at boot time\n",
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d3a93e23a87c..17aa34d70771 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -859,7 +859,8 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 				ID_AA64PFR0_CSV3_SHIFT);
 }
 
-static int kpti_install_ng_mappings(void *__unused)
+static void
+kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 {
 	typedef void (kpti_remap_fn)(int, int, phys_addr_t);
 	extern kpti_remap_fn idmap_kpti_install_ng_mappings;
@@ -869,7 +870,7 @@ static int kpti_install_ng_mappings(void *__unused)
 	int cpu = smp_processor_id();
 
 	if (kpti_applied)
-		return 0;
+		return;
 
 	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
 
@@ -880,7 +881,7 @@ static int kpti_install_ng_mappings(void *__unused)
 	if (!cpu)
 		kpti_applied = true;
 
-	return 0;
+	return;
 }
 
 static int __init parse_kpti(char *str)
@@ -897,7 +898,7 @@ static int __init parse_kpti(char *str)
 early_param("kpti", parse_kpti);
 #endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
-static int cpu_copy_el2regs(void *__unused)
+static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
 {
 	/*
 	 * Copy register values that aren't redirected by hardware.
@@ -909,8 +910,6 @@ static int cpu_copy_el2regs(void *__unused)
 	 */
 	if (!alternatives_applied)
 		write_sysreg(read_sysreg(tpidr_el1), tpidr_el2);
-
-	return 0;
 }
 
 static const struct arm64_cpu_capabilities arm64_features[] = {
@@ -934,7 +933,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64MMFR1_PAN_SHIFT,
 		.sign = FTR_UNSIGNED,
 		.min_field_value = 1,
-		.enable = cpu_enable_pan,
+		.cpu_enable = cpu_enable_pan,
 	},
 #endif /* CONFIG_ARM64_PAN */
 #if defined(CONFIG_AS_LSE) && defined(CONFIG_ARM64_LSE_ATOMICS)
@@ -982,7 +981,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.capability = ARM64_HAS_VIRT_HOST_EXTN,
 		.def_scope = SCOPE_SYSTEM,
 		.matches = runs_at_el2,
-		.enable = cpu_copy_el2regs,
+		.cpu_enable = cpu_copy_el2regs,
 	},
 	{
 		.desc = "32-bit EL0 Support",
@@ -1006,7 +1005,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
 		.def_scope = SCOPE_SYSTEM,
 		.matches = unmap_kernel_at_el0,
-		.enable = kpti_install_ng_mappings,
+		.cpu_enable = kpti_install_ng_mappings,
 	},
 #endif
 	{
@@ -1169,6 +1168,14 @@ void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 	}
 }
 
+static int __enable_cpu_capability(void *arg)
+{
+	const struct arm64_cpu_capabilities *cap = arg;
+
+	cap->cpu_enable(cap);
+	return 0;
+}
+
 /*
  * Run through the enabled capabilities and enable() it on all active
  * CPUs
@@ -1184,14 +1191,15 @@ void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
 		/* Ensure cpus_have_const_cap(num) works */
 		static_branch_enable(&cpu_hwcap_keys[num]);
 
-		if (caps->enable) {
+		if (caps->cpu_enable) {
 			/*
 			 * Use stop_machine() as it schedules the work allowing
 			 * us to modify PSTATE, instead of on_each_cpu() which
 			 * uses an IPI, giving us a PSTATE that disappears when
 			 * we return.
 			 */
-			stop_machine(caps->enable, (void *)caps, cpu_online_mask);
+			stop_machine(__enable_cpu_capability, (void *)caps,
+				     cpu_online_mask);
 		}
 	}
 }
@@ -1249,8 +1257,8 @@ verify_local_cpu_features(const struct arm64_cpu_capabilities *caps_list)
 					smp_processor_id(), caps->desc);
 			cpu_die_early();
 		}
-		if (caps->enable)
-			caps->enable((void *)caps);
+		if (caps->cpu_enable)
+			caps->cpu_enable(caps);
 	}
 }
 
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 5d547deb6996..f4fdf6420ac5 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -28,6 +28,7 @@
 #include
 #include
 
+#include
 #include
 #include
 
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 74259ae9c7f2..a4e49e947684 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -38,6 +38,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -436,10 +437,9 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
 	force_signal_inject(SIGILL, ILL_ILLOPC, regs, 0);
 }
 
-int cpu_enable_cache_maint_trap(void *__unused)
+void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 {
 	config_sctlr_el1(SCTLR_EL1_UCI, 0);
-	return 0;
 }
 
 #define __user_cache_maint(insn, address, res)			\
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 465b90d7abf2..bf7c285d0c82 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -875,7 +875,7 @@ asmlinkage int __exception do_debug_exception(unsigned long addr_if_watchpoint,
 NOKPROBE_SYMBOL(do_debug_exception);
 
 #ifdef CONFIG_ARM64_PAN
-int cpu_enable_pan(void *__unused)
+void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
 {
 	/*
 	 * We modify PSTATE. This won't work from irq context as the PSTATE
@@ -885,6 +885,5 @@ int cpu_enable_pan(void *__unused)
 
 	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
 	asm(SET_PSTATE_PAN(1));
-	return 0;
 }
 #endif /* CONFIG_ARM64_PAN */
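
For illustration, here is a minimal sketch of a capability entry under
the new contract. This is not code from the series: ARM64_EXAMPLE_CAP
and both helper functions below are hypothetical placeholders.

static bool example_matches(const struct arm64_cpu_capabilities *entry,
			    int scope)
{
	/* Hypothetical detection logic would go here. */
	return false;
}

static void cpu_enable_example(const struct arm64_cpu_capabilities *cap)
{
	/*
	 * Called on each successfully booted CPU for a detected
	 * capability. The entry itself is passed in, so a callback
	 * shared by several entries can inspect its matching
	 * criteria, and nothing is returned.
	 */
}

static const struct arm64_cpu_capabilities example_caps[] = {
	{
		.desc = "Example capability",
		.capability = ARM64_EXAMPLE_CAP,	/* hypothetical ID */
		.def_scope = SCOPE_SYSTEM,
		.matches = example_matches,
		.cpu_enable = cpu_enable_example,
	},
	{},
};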
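
One design point worth restating outside the diff: stop_machine() still
expects an int (*)(void *) worker, so the now strongly typed
cpu_enable() callback is driven through the small adapter added to
cpufeature.c above. The shape of that bridge, repeated here for
reference:

static int __enable_cpu_capability(void *arg)
{
	const struct arm64_cpu_capabilities *cap = arg;

	cap->cpu_enable(cap);
	return 0;	/* stop_machine() needs an int; the result is unused */
}

	/* Per detected capability, run the callback on all online CPUs: */
	stop_machine(__enable_cpu_capability, (void *)caps, cpu_online_mask);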