From patchwork Tue Aug 4 07:44:06 2015
X-Patchwork-Submitter: AKASHI Takahiro
X-Patchwork-Id: 51900
From: AKASHI Takahiro <takahiro.akashi@linaro.org>
To: catalin.marinas@arm.com, will.deacon@arm.com, rostedt@goodmis.org
Cc: jungseoklee85@gmail.com, olof@lixom.net, broonie@kernel.org, david.griego@linaro.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, AKASHI Takahiro
Subject: [RFC v2 1/4] ftrace: allow arch-specific check_stack()
Date: Tue, 4 Aug 2015 16:44:06 +0900
Message-Id: <1438674249-3447-2-git-send-email-takahiro.akashi@linaro.org>
In-Reply-To: <1438674249-3447-1-git-send-email-takahiro.akashi@linaro.org>
References: <1438674249-3447-1-git-send-email-takahiro.akashi@linaro.org>

A stack frame pointer may be used in different ways depending on the CPU
architecture, so it is not always appropriate to slurp the stack contents,
as check_stack() currently does, in order to calculate a stack index
(height) at a given function call. At least it is not appropriate on arm64.

This patch extracts the potentially arch-specific code from check_stack()
and puts it into a new function, arch_check_stack(), which is declared as
weak. Each architecture can then later override it with its own, more
efficient way of traversing the stack.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
---
 include/linux/stacktrace.h |  4 ++
 kernel/trace/trace_stack.c | 88 ++++++++++++++++++++++++++------------------
 2 files changed, 56 insertions(+), 36 deletions(-)

diff --git a/include/linux/stacktrace.h b/include/linux/stacktrace.h
index 0a34489..bfae605 100644
--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -10,6 +10,10 @@ struct pt_regs;
 struct stack_trace {
 	unsigned int nr_entries, max_entries;
 	unsigned long *entries;
+#ifdef CONFIG_STACK_TRACER
+	unsigned *index;
+	unsigned long *sp;
+#endif
 	int skip;	/* input argument: How many entries to skip */
 };

diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
index 3d9356b..021b8c3 100644
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -27,9 +27,10 @@ static unsigned stack_dump_index[STACK_TRACE_ENTRIES];
  * us to remove most or all of the stack size overhead
  * added by the stack tracer itself.
  */
-static struct stack_trace max_stack_trace = {
+struct stack_trace max_stack_trace = {
 	.max_entries		= STACK_TRACE_ENTRIES - 1,
 	.entries		= &stack_dump_trace[0],
+	.index			= &stack_dump_index[0],
 };

 static unsigned long max_stack_size;
@@ -65,42 +66,15 @@ static inline void print_max_stack(void)
 	}
 }

-static inline void
-check_stack(unsigned long ip, unsigned long *stack)
+void __weak
+arch_check_stack(unsigned long ip, unsigned long *stack,
+		 unsigned long *max_size, unsigned int *tracer_size)
 {
-	unsigned long this_size, flags;
-	unsigned long *p, *top, *start;
-	static int tracer_frame;
-	int frame_size = ACCESS_ONCE(tracer_frame);
+	unsigned long *p, *top, *start;
+	unsigned long this_size;
 	int i, x;

-	this_size = ((unsigned long)stack) & (THREAD_SIZE-1);
-	this_size = THREAD_SIZE - this_size;
-	/* Remove the frame of the tracer */
-	this_size -= frame_size;
-
-	if (this_size <= max_stack_size)
-		return;
-
-	/* we do not handle interrupt stacks yet */
-	if (!object_is_on_stack(stack))
-		return;
-
-	local_irq_save(flags);
-	arch_spin_lock(&max_stack_lock);
-
-	/* In case another CPU set the tracer_frame on us */
-	if (unlikely(!frame_size))
-		this_size -= tracer_frame;
-
-	/* a race could have already updated it */
-	if (this_size <= max_stack_size)
-		goto out;
-
-	max_stack_size = this_size;
-
-	max_stack_trace.nr_entries = 0;
 	max_stack_trace.skip = 3;
-
 	save_stack_trace(&max_stack_trace);

 	/* Skip over the overhead of the stack tracer itself */
@@ -116,6 +90,7 @@ check_stack(unsigned long ip, unsigned long *stack)
 	start = stack;
 	top = (unsigned long *)
 		(((unsigned long)start & ~(THREAD_SIZE-1)) + THREAD_SIZE);
+	this_size = *max_size;

 	/*
 	 * Loop through all the entries. One of the entries may
@@ -146,10 +121,10 @@ check_stack(unsigned long ip, unsigned long *stack)
 			 * out what that is, then figure it out
 			 * now.
 			 */
-			if (unlikely(!tracer_frame)) {
-				tracer_frame = (p - stack) *
+			if (unlikely(!*tracer_size)) {
+				*tracer_size = (p - stack) *
 					sizeof(unsigned long);
-				max_stack_size -= tracer_frame;
+				*max_size -= *tracer_size;
 			}
 		}
 	}
@@ -161,6 +136,47 @@ check_stack(unsigned long ip, unsigned long *stack)
 	max_stack_trace.nr_entries = x;
 	for (; x < i; x++)
 		stack_dump_trace[x] = ULONG_MAX;
+}
+
+static inline void
+check_stack(unsigned long ip, unsigned long *stack)
+{
+	unsigned long this_size, flags;
+	static int tracer_frame;
+	int frame_size = ACCESS_ONCE(tracer_frame);
+
+	this_size = ((unsigned long)stack) & (THREAD_SIZE-1);
+	this_size = THREAD_SIZE - this_size;
+	/* for safety, depending on arch_check_stack() */
+	if (this_size < frame_size)
+		return;
+
+	/* Remove the frame of the tracer */
+	this_size -= frame_size;
+
+	if (this_size <= max_stack_size)
+		return;
+
+	/* we do not handle interrupt stacks yet */
+	if (!object_is_on_stack(stack))
+		return;
+
+	local_irq_save(flags);
+	arch_spin_lock(&max_stack_lock);
+
+	/* In case another CPU set the tracer_frame on us */
+	if (unlikely(!frame_size))
+		this_size -= tracer_frame;
+
+	/* a race could have already updated it */
+	if (this_size <= max_stack_size)
+		goto out;
+
+	max_stack_size = this_size;
+
+	max_stack_trace.nr_entries = 0;
+
+	arch_check_stack(ip, stack, &max_stack_size, &tracer_frame);

 	if (task_stack_end_corrupted(current)) {
 		print_max_stack();