From patchwork Fri Jun  1 17:12:10 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 137572
From: Mark Rutland
To: stable@vger.kernel.org
Cc: marc.zyngier@arm.com, mark.rutland@arm.com
Subject: [PATCH v4.9.y 2/2] arm64/cpufeature: don't use mutex in bringup path
Date: Fri, 1 Jun 2018 18:12:10 +0100
Message-Id: <20180601171210.16870-3-mark.rutland@arm.com>
In-Reply-To: <20180601171210.16870-1-mark.rutland@arm.com>
References: <20180601171210.16870-1-mark.rutland@arm.com>
X-Mailing-List: stable@vger.kernel.org

commit 63a1e1c95e60e798fa09ab3c536fb555aa5bbf2b upstream.

Currently, cpus_set_cap() calls static_branch_enable_cpuslocked(), which
must take the jump_label mutex.
We call cpus_set_cap() in the secondary bringup path, from the idle
thread where interrupts are disabled. Taking a mutex in this path "is a
NONO" regardless of whether it's contended, and something we must avoid.
We didn't spot this until recently, as ___might_sleep() won't warn for
this case until all CPUs have been brought up.

This patch avoids taking the mutex in the secondary bringup path. The
poking of static keys is deferred until enable_cpu_capabilities(), which
runs in a suitable context on the boot CPU. To account for the static
keys being set later, cpus_have_const_cap() is updated to use another
static key to check whether the const cap keys have been initialised,
falling back to the caps bitmap until this is the case.

This means that users of cpus_have_const_cap() should only gain a single
additional NOP in the fast path once the const caps are initialised, but
should always see the current cap value.

The hyp code should never dereference the caps array, since the caps are
initialized before we run the module initcall to initialise hyp. A check
is added to the hyp init code to document this requirement.

This change will sidestep a number of issues when the upcoming hotplug
locking rework is merged.
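[Editor's illustration, not part of the patch: the check-then-fallback pattern described above, sketched as a standalone C model. Plain booleans stand in for the kernel's static keys and caps bitmap; the names mirror the kernel's, but this is a hypothetical sketch, not kernel code.]

```c
#include <stdbool.h>

#define ARM64_NCAPS 64

/* Stand-ins for the kernel's data: a bitmap set early during bringup,
 * per-cap keys poked later in enable_cpu_capabilities(), and a single
 * "keys are ready" flag flipped by mark_const_caps_ready(). */
static bool cpu_hwcaps[ARM64_NCAPS];     /* caps bitmap, set in cpus_set_cap() */
static bool cpu_hwcap_keys[ARM64_NCAPS]; /* per-cap static keys */
static bool arm64_const_caps_ready;      /* set once keys are initialised */

static bool cpus_have_cap(unsigned int num)
{
	return num < ARM64_NCAPS && cpu_hwcaps[num];
}

static bool __cpus_have_const_cap(int num)
{
	return num >= 0 && num < ARM64_NCAPS && cpu_hwcap_keys[num];
}

/* Fast path via the keys once they are initialised; fall back to the
 * caps bitmap during secondary bringup, so callers always see the
 * current cap value. */
static bool cpus_have_const_cap(int num)
{
	if (arm64_const_caps_ready)
		return __cpus_have_const_cap(num);
	return cpus_have_cap(num);
}
```

In the real patch the ready check is static_branch_likely() on a jump label, so once arm64_const_caps_ready is enabled the extra check costs only a patched-in NOP on the fast path.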
Signed-off-by: Mark Rutland
Reviewed-by: Marc Zyngier
Reviewed-by: Suzuki Poulose
Acked-by: Will Deacon
Cc: Christoffer Dall
Cc: Peter Zijlstra
Cc: Sebastian Sewior
Cc: Thomas Gleixner
Signed-off-by: Catalin Marinas
[4.9: this avoids an IPI before GICv3 is up, preventing a boot time crash]
Signed-off-by: Mark Rutland [v4.9 backport]
---
 arch/arm64/include/asm/cpufeature.h | 12 ++++++++++--
 arch/arm64/include/asm/kvm_host.h   |  8 ++++++--
 arch/arm64/kernel/cpufeature.c      | 23 +++++++++++++++++++++--
 3 files changed, 37 insertions(+), 6 deletions(-)

--
2.11.0

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 9890d20f2356..4ea85ebdf4df 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -96,6 +96,7 @@ struct arm64_cpu_capabilities {
 
 extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
 extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
+extern struct static_key_false arm64_const_caps_ready;
 
 bool this_cpu_has_cap(unsigned int cap);
 
@@ -105,7 +106,7 @@ static inline bool cpu_have_feature(unsigned int num)
 }
 
 /* System capability check for constant caps */
-static inline bool cpus_have_const_cap(int num)
+static inline bool __cpus_have_const_cap(int num)
 {
 	if (num >= ARM64_NCAPS)
 		return false;
@@ -119,6 +120,14 @@ static inline bool cpus_have_cap(unsigned int num)
 	return test_bit(num, cpu_hwcaps);
 }
 
+static inline bool cpus_have_const_cap(int num)
+{
+	if (static_branch_likely(&arm64_const_caps_ready))
+		return __cpus_have_const_cap(num);
+	else
+		return cpus_have_cap(num);
+}
+
 static inline void cpus_set_cap(unsigned int num)
 {
 	if (num >= ARM64_NCAPS) {
@@ -126,7 +135,6 @@ static inline void cpus_set_cap(unsigned int num)
 			num, ARM64_NCAPS);
 	} else {
 		__set_bit(num, cpu_hwcaps);
-		static_branch_enable(&cpu_hwcap_keys[num]);
 	}
 }
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7f402f64c744..2abb4493f4f6 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -24,6 +24,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -358,9 +359,12 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
 				       unsigned long hyp_stack_ptr,
 				       unsigned long vector_ptr)
 {
 	/*
-	 * Call initialization code, and switch to the full blown
-	 * HYP code.
+	 * Call initialization code, and switch to the full blown HYP code.
+	 * If the cpucaps haven't been finalized yet, something has gone very
+	 * wrong, and hyp will crash and burn when it uses any
+	 * cpus_have_const_cap() wrapper.
 	 */
+	BUG_ON(!static_branch_likely(&arm64_const_caps_ready));
 	__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr);
 }
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e28665411bd1..7959d2c92010 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1052,8 +1052,16 @@ void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
  */
 void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
 {
-	for (; caps->matches; caps++)
-		if (caps->enable && cpus_have_cap(caps->capability))
+	for (; caps->matches; caps++) {
+		unsigned int num = caps->capability;
+
+		if (!cpus_have_cap(num))
+			continue;
+
+		/* Ensure cpus_have_const_cap(num) works */
+		static_branch_enable(&cpu_hwcap_keys[num]);
+
+		if (caps->enable) {
 			/*
 			 * Use stop_machine() as it schedules the work allowing
 			 * us to modify PSTATE, instead of on_each_cpu() which
@@ -1061,6 +1069,8 @@ void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
 			 * we return.
 			 */
 			stop_machine(caps->enable, (void *)caps, cpu_online_mask);
+		}
+	}
 }
 
 /*
@@ -1164,6 +1174,14 @@ static void __init setup_feature_capabilities(void)
 	enable_cpu_capabilities(arm64_features);
 }
 
+DEFINE_STATIC_KEY_FALSE(arm64_const_caps_ready);
+EXPORT_SYMBOL(arm64_const_caps_ready);
+
+static void __init mark_const_caps_ready(void)
+{
+	static_branch_enable(&arm64_const_caps_ready);
+}
+
 extern const struct arm64_cpu_capabilities arm64_errata[];
 
 bool this_cpu_has_cap(unsigned int cap)
@@ -1180,6 +1198,7 @@ void __init setup_cpu_features(void)
 	/* Set the CPU feature capabilies */
 	setup_feature_capabilities();
 	enable_errata_workarounds();
+	mark_const_caps_ready();
 	setup_elf_hwcaps(arm64_elf_hwcaps);
 
 	if (system_supports_32bit_el0())