From patchwork Fri Oct 30 05:25:39 2015
X-Patchwork-Submitter: AKASHI Takahiro
X-Patchwork-Id: 55818
From: AKASHI Takahiro <takahiro.akashi@linaro.org>
To: catalin.marinas@arm.com, will.deacon@arm.com, rostedt@goodmis.org
Cc: jungseoklee85@gmail.com, olof@lixom.net, broonie@kernel.org,
	david.griego@linaro.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	AKASHI Takahiro <takahiro.akashi@linaro.org>
Subject: [PATCH v4 4/6] ftrace: allow arch-specific stack tracer
Date: Fri, 30 Oct 2015 14:25:39 +0900
Message-Id: <1446182741-31019-5-git-send-email-takahiro.akashi@linaro.org>
In-Reply-To: <1446182741-31019-1-git-send-email-takahiro.akashi@linaro.org>
References: <1446182741-31019-1-git-send-email-takahiro.akashi@linaro.org>

A stack frame may be used in a different way depending on the CPU
architecture. Thus it is not always appropriate to slurp the stack
contents, as the current check_stack() does, in order to calculate a
stack index (height) at a given function call. At least not on arm64.
In addition, there is a possibility that we will mistakenly detect a
stale stack frame which has not been overwritten.

This patch makes check_stack() a weak function so that an
arch-specific version can be implemented later.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
---
 include/linux/ftrace.h     | 10 ++++++
 kernel/trace/trace_stack.c | 80 ++++++++++++++++++++++++--------------------
 2 files changed, 53 insertions(+), 37 deletions(-)

-- 
1.7.9.5

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index d77b195..c304650 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -270,7 +270,17 @@ static inline void ftrace_kill(void) { }
 #define FTRACE_STACK_FRAME_OFFSET 0
 #endif
 
+#define STACK_TRACE_ENTRIES 500
+
+struct stack_trace;
+
+extern unsigned stack_trace_index[];
+extern struct stack_trace stack_trace_max;
+extern unsigned long stack_trace_max_size;
+extern arch_spinlock_t max_stack_lock;
+
 extern int stack_tracer_enabled;
+void stack_trace_print(void);
 int stack_trace_sysctl(struct ctl_table *table, int write,
 		       void __user *buffer, size_t *lenp,
 		       loff_t *ppos);
diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
index 2e452e8..40f3368 100644
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -16,24 +16,22 @@
 
 #include "trace.h"
 
-#define STACK_TRACE_ENTRIES 500
-
 static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES+1] =
 	 { [0 ... (STACK_TRACE_ENTRIES)] = ULONG_MAX };
 
-static unsigned stack_dump_index[STACK_TRACE_ENTRIES];
+unsigned stack_trace_index[STACK_TRACE_ENTRIES];
 
 /*
  * Reserve one entry for the passed in ip. This will allow
  * us to remove most or all of the stack size overhead
  * added by the stack tracer itself.
 */
-static struct stack_trace max_stack_trace = {
+struct stack_trace stack_trace_max = {
 	.max_entries		= STACK_TRACE_ENTRIES - 1,
 	.entries		= &stack_dump_trace[0],
 };
 
-static unsigned long max_stack_size;
-static arch_spinlock_t max_stack_lock =
+unsigned long stack_trace_max_size;
+arch_spinlock_t max_stack_lock =
 	(arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
 
 static DEFINE_PER_CPU(int, trace_active);
@@ -42,30 +40,38 @@ static DEFINE_MUTEX(stack_sysctl_mutex);
 int stack_tracer_enabled;
 static int last_stack_tracer_enabled;
 
-static inline void print_max_stack(void)
+void stack_trace_print(void)
 {
 	long i;
 	int size;
 
 	pr_emerg("        Depth    Size   Location    (%d entries)\n"
 		 "        -----    ----   --------\n",
-		 max_stack_trace.nr_entries);
+		 stack_trace_max.nr_entries);
 
-	for (i = 0; i < max_stack_trace.nr_entries; i++) {
+	for (i = 0; i < stack_trace_max.nr_entries; i++) {
 		if (stack_dump_trace[i] == ULONG_MAX)
 			break;
-		if (i+1 == max_stack_trace.nr_entries ||
+		if (i+1 == stack_trace_max.nr_entries ||
 		    stack_dump_trace[i+1] == ULONG_MAX)
-			size = stack_dump_index[i];
+			size = stack_trace_index[i];
 		else
-			size = stack_dump_index[i] - stack_dump_index[i+1];
+			size = stack_trace_index[i] - stack_trace_index[i+1];
 
-		pr_emerg("%3ld) %8d   %5d   %pS\n", i, stack_dump_index[i],
+		pr_emerg("%3ld) %8d   %5d   %pS\n", i, stack_trace_index[i],
 			 size, (void *)stack_dump_trace[i]);
 	}
 }
 
-static inline void
+/*
+ * When arch-specific code overrides this function, the following
+ * data should be filled up, assuming max_stack_lock is held to
+ * prevent concurrent updates.
+ *     stack_trace_index[]
+ *     stack_trace_max
+ *     stack_trace_max_size
+ */
+void __weak
 check_stack(unsigned long ip, unsigned long *stack)
 {
 	unsigned long this_size, flags;
 	unsigned long *p, *top, *start, addr;
@@ -78,7 +84,7 @@ check_stack(unsigned long ip, unsigned long *stack)
 	/* Remove the frame of the tracer */
 	this_size -= frame_size;
 
-	if (this_size <= max_stack_size)
+	if (this_size <= stack_trace_max_size)
 		return;
 
 	/* we do not handle interrupt stacks yet */
@@ -103,18 +109,18 @@ check_stack(unsigned long ip, unsigned long *stack)
 		this_size -= tracer_frame;
 
 	/* a race could have already updated it */
-	if (this_size <= max_stack_size)
+	if (this_size <= stack_trace_max_size)
 		goto out;
 
-	max_stack_size = this_size;
+	stack_trace_max_size = this_size;
 
-	max_stack_trace.nr_entries = 0;
-	max_stack_trace.skip = 3;
+	stack_trace_max.nr_entries = 0;
+	stack_trace_max.skip = 3;
 
-	save_stack_trace(&max_stack_trace);
+	save_stack_trace(&stack_trace_max);
 
 	/* Skip over the overhead of the stack tracer itself */
-	for (i = 0; i < max_stack_trace.nr_entries; i++) {
+	for (i = 0; i < stack_trace_max.nr_entries; i++) {
 		addr = stack_dump_trace[i] + FTRACE_STACK_FRAME_OFFSET;
 		if (addr == ip)
 			break;
@@ -135,19 +141,19 @@ check_stack(unsigned long ip, unsigned long *stack)
 	 * loop will only happen once. This code only takes place
 	 * on a new max, so it is far from a fast path.
 	 */
-	while (i < max_stack_trace.nr_entries) {
+	while (i < stack_trace_max.nr_entries) {
 		int found = 0;
 
-		stack_dump_index[x] = this_size;
+		stack_trace_index[x] = this_size;
 		p = start;
 
-		for (; p < top && i < max_stack_trace.nr_entries; p++) {
+		for (; p < top && i < stack_trace_max.nr_entries; p++) {
 			if (stack_dump_trace[i] == ULONG_MAX)
 				break;
 			addr = stack_dump_trace[i] + FTRACE_STACK_FRAME_OFFSET;
 			if (*p == addr) {
 				stack_dump_trace[x] = stack_dump_trace[i++];
-				this_size = stack_dump_index[x++] =
+				this_size = stack_trace_index[x++] =
 					(top - p) * sizeof(unsigned long);
 				found = 1;
 				/* Start the search from here */
@@ -162,7 +168,7 @@ check_stack(unsigned long ip, unsigned long *stack)
 
 				if (unlikely(!tracer_frame)) {
 					tracer_frame = (p - stack) *
 						sizeof(unsigned long);
-					max_stack_size -= tracer_frame;
+					stack_trace_max_size -= tracer_frame;
 				}
 			}
 		}
@@ -171,12 +177,12 @@ check_stack(unsigned long ip, unsigned long *stack)
 			i++;
 	}
 
-	max_stack_trace.nr_entries = x;
+	stack_trace_max.nr_entries = x;
 	for (; x < i; x++)
 		stack_dump_trace[x] = ULONG_MAX;
 
 	if (task_stack_end_corrupted(current)) {
-		print_max_stack();
+		stack_trace_print();
 		BUG();
 	}
@@ -275,7 +281,7 @@ __next(struct seq_file *m, loff_t *pos)
 {
 	long n = *pos - 1;
 
-	if (n > max_stack_trace.nr_entries || stack_dump_trace[n] == ULONG_MAX)
+	if (n > stack_trace_max.nr_entries || stack_dump_trace[n] == ULONG_MAX)
 		return NULL;
 
 	m->private = (void *)n;
@@ -345,9 +351,9 @@ static int t_show(struct seq_file *m, void *v)
 		seq_printf(m, "        Depth    Size   Location"
 			   "    (%d entries)\n"
 			   "        -----    ----   --------\n",
-			   max_stack_trace.nr_entries);
+			   stack_trace_max.nr_entries);
 
-		if (!stack_tracer_enabled && !max_stack_size)
+		if (!stack_tracer_enabled && !stack_trace_max_size)
 			print_disabled(m);
 
 		return 0;
@@ -355,17 +361,17 @@ static int t_show(struct seq_file *m, void *v)
 
 	i = *(long *)v;
 
-	if (i >= max_stack_trace.nr_entries ||
+	if (i >= stack_trace_max.nr_entries ||
 	    stack_dump_trace[i] == ULONG_MAX)
 		return 0;
 
-	if (i+1 == max_stack_trace.nr_entries ||
+	if (i+1 == stack_trace_max.nr_entries ||
 	    stack_dump_trace[i+1] == ULONG_MAX)
-		size = stack_dump_index[i];
+		size = stack_trace_index[i];
 	else
-		size = stack_dump_index[i] - stack_dump_index[i+1];
+		size = stack_trace_index[i] - stack_trace_index[i+1];
 
-	seq_printf(m, "%3ld) %8d   %5d   ", i, stack_dump_index[i], size);
+	seq_printf(m, "%3ld) %8d   %5d   ", i, stack_trace_index[i], size);
 
 	trace_lookup_stack(m, i);
@@ -455,7 +461,7 @@ static __init int stack_trace_init(void)
 		return 0;
 
 	trace_create_file("stack_max_size", 0644, d_tracer,
-			&max_stack_size, &stack_max_size_fops);
+			&stack_trace_max_size, &stack_max_size_fops);
 
 	trace_create_file("stack_trace", 0444, d_tracer,
 			NULL, &stack_trace_fops);
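
[Editor's illustration, not part of the patch] With check_stack() made
weak, the contract for an architecture is only what the new comment in
trace_stack.c states: take max_stack_lock, then fill stack_trace_index[],
stack_trace_max and stack_trace_max_size. Below is a minimal sketch of
that shape. It is not the arm64 implementation from this series;
arch_frame_size_at() is an invented placeholder for an arch-specific
frame walk, while the stack-depth arithmetic and locking are taken from
the generic code above.

/*
 * Sketch of a hypothetical arch-specific override of the weak
 * check_stack().  arch_frame_size_at() does not exist; it stands in
 * for whatever arch mechanism reports the stack usage recorded for
 * the i-th saved entry (e.g. a frame-pointer walk on arm64).
 */
void check_stack(unsigned long ip, unsigned long *stack)
{
	unsigned long this_size, flags;
	int i;

	/* bytes currently in use on this thread's stack */
	this_size = ((unsigned long)stack) & (THREAD_SIZE - 1);
	this_size = THREAD_SIZE - this_size;

	if (this_size <= stack_trace_max_size)
		return;

	local_irq_save(flags);
	arch_spin_lock(&max_stack_lock);

	/* a race could have already updated the max */
	if (this_size > stack_trace_max_size) {
		stack_trace_max_size = this_size;

		stack_trace_max.nr_entries = 0;
		save_stack_trace(&stack_trace_max);

		/* per-entry stack depth, from the arch frame walk */
		for (i = 0; i < stack_trace_max.nr_entries; i++)
			stack_trace_index[i] = arch_frame_size_at(i);
	}

	arch_spin_unlock(&max_stack_lock);
	local_irq_restore(flags);
}

The benefit of the hook is visible here: an architecture with reliable
frame records can compute each entry's depth directly instead of
scanning every word of the stack for saved return addresses, which is
the slurp (and the stale-frame false-positive risk) that the commit
message describes.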