From patchwork Fri Aug 19 16:13:10 2016
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 74288
From: Daniel Thompson <daniel.thompson@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Daniel Thompson, Catalin Marinas, Will Deacon,
 linux-kernel@vger.kernel.org, patches@linaro.org,
 linaro-kernel@lists.linaro.org, John Stultz, Sumit Semwal,
 Marc Zyngier, Dave Martin, Russell King
Subject: [RFC PATCH v3 2/7] arm64: Add support for on-demand backtrace of other CPUs
Date: Fri, 19 Aug 2016 17:13:10 +0100
Message-Id: <1471623195-7829-3-git-send-email-daniel.thompson@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1471623195-7829-1-git-send-email-daniel.thompson@linaro.org>
References: <1471623195-7829-1-git-send-email-daniel.thompson@linaro.org>

Currently arm64 has no implementation of arch_trigger_all_cpu_backtrace.
This patch provides one, using library code recently added by Russell King
for the majority of the implementation.
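As an aside for reviewers new to the generic helper: nothing is expected to
call the arch hook directly; users go through the trigger_all_cpu_backtrace()
wrapper in include/linux/nmi.h. A minimal, illustrative caller is sketched
below (example_dump_all_cpus() is a made-up name and is not part of this
patch):

#include <linux/nmi.h>
#include <linux/printk.h>

/* Hypothetical debug helper: backtrace every online CPU. */
static void example_dump_all_cpus(void)
{
	/*
	 * trigger_all_cpu_backtrace() expands to the arch hook whenever the
	 * architecture defines arch_trigger_all_cpu_backtrace(); with this
	 * patch that is now the case on arm64. It returns false when no
	 * architecture support is available.
	 */
	if (!trigger_all_cpu_backtrace())
		pr_info("all-CPU backtrace not supported here\n");
}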
Currently this is realized using regular IRQs but could, in the future, be
implemented using NMI-like mechanisms.

Note: There is a small (and nasty) change to the generic code to ensure good
stack traces. The generic code currently assumes that show_regs() will
include a stack trace, but arch/arm64 does not do this, so we must add extra
code here. Ideas on a better approach would be very welcome (is there any
appetite to change arm64 show_regs(), or should we just tease out the dump
code into a callback?).

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Russell King
---
 arch/arm64/include/asm/hardirq.h |  2 +-
 arch/arm64/include/asm/irq.h     |  3 +++
 arch/arm64/kernel/smp.c          | 30 +++++++++++++++++++++++++++++-
 lib/nmi_backtrace.c              |  9 +++++++--
 4 files changed, 40 insertions(+), 4 deletions(-)

-- 
2.7.4

diff --git a/arch/arm64/include/asm/hardirq.h b/arch/arm64/include/asm/hardirq.h
index 8740297dac77..1473fc2f7ab7 100644
--- a/arch/arm64/include/asm/hardirq.h
+++ b/arch/arm64/include/asm/hardirq.h
@@ -20,7 +20,7 @@
 #include
 #include

-#define NR_IPI 6
+#define NR_IPI 7

 typedef struct {
 	unsigned int __softirq_pending;
diff --git a/arch/arm64/include/asm/irq.h b/arch/arm64/include/asm/irq.h
index b77197d941fc..67dc130ae517 100644
--- a/arch/arm64/include/asm/irq.h
+++ b/arch/arm64/include/asm/irq.h
@@ -56,5 +56,8 @@ static inline bool on_irq_stack(unsigned long sp, int cpu)
 	return (low <= sp && sp <= high);
 }

+extern void arch_trigger_all_cpu_backtrace(bool);
+#define arch_trigger_all_cpu_backtrace(x) arch_trigger_all_cpu_backtrace(x)
+
 #endif /* !__ASSEMBLER__ */
 #endif
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index d93d43352504..99f607f0fa97 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -37,6 +37,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
@@ -73,7 +74,8 @@ enum ipi_msg_type {
 	IPI_CPU_STOP,
 	IPI_TIMER,
 	IPI_IRQ_WORK,
-	IPI_WAKEUP
+	IPI_WAKEUP,
+	IPI_CPU_BACKTRACE,
 };

 #ifdef CONFIG_ARM64_VHE
@@ -737,6 +739,7 @@ static const char *ipi_types[NR_IPI] __tracepoint_string = {
 	S(IPI_TIMER, "Timer broadcast interrupts"),
 	S(IPI_IRQ_WORK, "IRQ work interrupts"),
 	S(IPI_WAKEUP, "CPU wake-up interrupts"),
+	S(IPI_CPU_BACKTRACE, "backtrace interrupts"),
 };

 static void smp_cross_call(const struct cpumask *target, unsigned int ipinr)
@@ -862,6 +865,14 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
 		break;
 #endif

+	case IPI_CPU_BACKTRACE:
+		printk_nmi_enter();
+		irq_enter();
+		nmi_cpu_backtrace(regs);
+		irq_exit();
+		printk_nmi_exit();
+		break;
+
 	default:
 		pr_crit("CPU%u: Unknown IPI message 0x%x\n", cpu, ipinr);
 		break;
@@ -935,3 +946,20 @@ bool cpus_are_stuck_in_kernel(void)

 	return !!cpus_stuck_in_kernel || smp_spin_tables;
 }
+
+static void raise_nmi(cpumask_t *mask)
+{
+	/*
+	 * Generate the backtrace directly if we are running in a
+	 * calling context that is not preemptible by the backtrace IPI.
+	 */
+	if (cpumask_test_cpu(smp_processor_id(), mask) && irqs_disabled())
+		nmi_cpu_backtrace(NULL);
+
+	smp_cross_call(mask, IPI_CPU_BACKTRACE);
+}
+
+void arch_trigger_all_cpu_backtrace(bool include_self)
+{
+	nmi_trigger_all_cpu_backtrace(include_self, raise_nmi);
+}
diff --git a/lib/nmi_backtrace.c b/lib/nmi_backtrace.c
index 26caf51cc238..3dada8487477 100644
--- a/lib/nmi_backtrace.c
+++ b/lib/nmi_backtrace.c
@@ -78,10 +78,15 @@ bool nmi_cpu_backtrace(struct pt_regs *regs)

 	if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
 		pr_warn("NMI backtrace for cpu %d\n", cpu);
-		if (regs)
+		if (regs) {
 			show_regs(regs);
-		else
+#ifdef CONFIG_ARM64
+			show_stack(NULL, NULL);
+#endif
+		} else {
 			dump_stack();
+		}
+
 		cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));
 		return true;
 	}
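For context on the special case in raise_nmi() above: the generic helper in
lib/nmi_backtrace.c fills a pending-CPU mask, calls the architecture's raise
callback, and then busy-waits for each targeted CPU's handler to clear its
own bit. If the requesting CPU is in the mask but has interrupts masked, its
own IPI cannot be taken while that busy-wait runs, so the local backtrace is
produced synchronously and the eventual local IPI becomes a no-op. The sketch
below is a condensed illustration of that flow, not the verbatim library
source; the _sketch names are invented here and details such as locking,
console messages and the exact timeout are omitted or approximated.

#include <linux/cpumask.h>
#include <linux/delay.h>
#include <linux/smp.h>

static DECLARE_BITMAP(backtrace_mask_sketch, NR_CPUS);

/* Condensed, illustrative flow of the generic trigger path. */
static void nmi_trigger_all_cpu_backtrace_sketch(bool include_self,
						 void (*raise)(cpumask_t *mask))
{
	int this_cpu = get_cpu();	/* stay put while IPIs are in flight */
	int i;

	cpumask_copy(to_cpumask(backtrace_mask_sketch), cpu_online_mask);
	if (!include_self)
		cpumask_clear_cpu(this_cpu, to_cpumask(backtrace_mask_sketch));

	/*
	 * On arm64 this callback is raise_nmi() above: it backtraces the
	 * local CPU synchronously when IRQs are masked (the IPI could not
	 * preempt the busy-wait below) and sends IPI_CPU_BACKTRACE to the
	 * mask; the already-cleared local bit makes the late local IPI
	 * harmless.
	 */
	if (!cpumask_empty(to_cpumask(backtrace_mask_sketch)))
		raise(to_cpumask(backtrace_mask_sketch));

	/* Each handler clears its CPU's bit from nmi_cpu_backtrace(). */
	for (i = 0; i < 10 * 1000; i++) {	/* bounded wait, roughly 10s */
		if (cpumask_empty(to_cpumask(backtrace_mask_sketch)))
			break;
		mdelay(1);
	}

	put_cpu();
}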