| Message ID | 20171204094642.6901-1-patrick.bellasi@arm.com |
|---|---|
| State | New |
| Series | tracing: Fix trace_preempt_{on,off} for !CONFIG_DEBUG_PREEMPT builds |
On Mon, 4 Dec 2017 09:46:42 +0000
Patrick Bellasi <patrick.bellasi@arm.com> wrote:

> The new preempt enable/disable events introduced by:
>
>   d59158162 tracing: Add support for preempt and irq enable/disable events
>
> are defined only for CONFIG_DEBUG_PREEMPT kernels when the
> CONFIG_PREEMPTIRQ_EVENTS support, introduced by the above patch, is
> enabled.
>
> These events are generated by trace_preempt_{on,off} calls, which are
> part of the "Preemption-off Latency Tracer" (CONFIG_PREEMPT_TRACER).
>
> Currently these calls are (correctly) generated on CONFIG_PREEMPT_TRACER
> kernels even when CONFIG_DEBUG_PREEMPT is not set, thus leading to
> undefined references to the new trace events.
>
> Let's ensure that we always have a (possibly empty) definition of the
> events for CONFIG_PREEMPTIRQ_EVENTS kernels.
>
> This patch ensures that the additional preempt enable/disable events
> are generated only when we have both:
>   CONFIG_PREEMPT_TRACER && CONFIG_DEBUG_PREEMPT
>
> Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>

Thanks, but you're the third person to send me a fix for this. I have
the first fix applied to my code based off of 4.14-rc1, but that branch
breaks my tests. I was waiting for rc2 to come out (which just happened)
to rebase on and run my tests. I'll do that now.

Thanks,

-- Steve
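The failure described above is the classic mismatch where a caller is compiled under one condition while the symbol it needs is only emitted under a narrower one. A minimal user-space reduction of that pattern (hypothetical flag and function names, not the kernel's actual code) fails to link in exactly the same way:

/*
 * guard_mismatch.c - hypothetical reduction of the bug class: the
 * declaration guard is wider than the definition guard, so some flag
 * combinations compile cleanly but fail at link time.
 *
 *   cc -DUSE_TRACER -DPROVIDE_EVENTS guard_mismatch.c   # links fine
 *   cc -DUSE_TRACER                  guard_mismatch.c   # undefined reference
 */
#if defined(USE_TRACER) || defined(PROVIDE_EVENTS)
extern void trace_preempt_off_demo(unsigned long a0, unsigned long a1);
#else
static inline void trace_preempt_off_demo(unsigned long a0, unsigned long a1) { }
#endif

#ifdef PROVIDE_EVENTS
/* Stands in for the event definitions that only exist on
 * CONFIG_DEBUG_PREEMPT kernels. */
void trace_preempt_off_demo(unsigned long a0, unsigned long a1) { }
#endif

int main(void)
{
#ifdef USE_TRACER
        /* Stands in for the latency-tracer call sites: present whenever
         * the tracer is built, whether or not the events are. */
        trace_preempt_off_demo(0, 0);
#endif
        return 0;
}

Building with both -DUSE_TRACER and -DPROVIDE_EVENTS links cleanly; dropping -DPROVIDE_EVENTS reproduces the "undefined reference" at link time, mirroring a CONFIG_PREEMPT_TRACER && !CONFIG_DEBUG_PREEMPT kernel.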
On 04-Dec 06:26, Steven Rostedt wrote:
> On Mon, 4 Dec 2017 09:46:42 +0000
> Patrick Bellasi <patrick.bellasi@arm.com> wrote:
>
> > The new preempt enable/disable events introduced by:
> >
> >   d59158162 tracing: Add support for preempt and irq enable/disable events
> >
> > are defined only for CONFIG_DEBUG_PREEMPT kernels when the
> > CONFIG_PREEMPTIRQ_EVENTS support, introduced by the above patch, is
> > enabled.
> >
> > These events are generated by trace_preempt_{on,off} calls, which are
> > part of the "Preemption-off Latency Tracer" (CONFIG_PREEMPT_TRACER).
> >
> > Currently these calls are (correctly) generated on CONFIG_PREEMPT_TRACER
> > kernels even when CONFIG_DEBUG_PREEMPT is not set, thus leading to
> > undefined references to the new trace events.
> >
> > Let's ensure that we always have a (possibly empty) definition of the
> > events for CONFIG_PREEMPTIRQ_EVENTS kernels.
> >
> > This patch ensures that the additional preempt enable/disable events
> > are generated only when we have both:
> >   CONFIG_PREEMPT_TRACER && CONFIG_DEBUG_PREEMPT
> >
> > Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
>
> Thanks, but you're the third person to send me a fix for this. I have
> the first fix applied to my code based off of 4.14-rc1, but that branch
> breaks my tests. I was waiting for rc2 to come out (which just happened)
> to rebase on and run my tests. I'll do that now.

No problem, I did not notice the previous fixes.

> Thanks,
>
> -- Steve

Cheers,
Patrick

--
#include <best/regards.h>
Patrick Bellasi
Hi Patrick,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.15-rc2 next-20171205]
[cannot apply to tip/perf/core]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:      https://github.com/0day-ci/linux/commits/Patrick-Bellasi/tracing-Fix-trace_preempt_-on-off-for-CONFIG_DEBUG_PREEMPT-builds/20171205-174031
config:   x86_64-randconfig-u0-12052015 (attached as .config)
compiler: gcc-5 (Debian 5.5.0-3) 5.4.1 20171010
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64

All errors (new ones prefixed by >>):

   kernel/softirq.o: In function `__local_bh_disable_ip':
>> kernel/softirq.c:132: undefined reference to `trace_preempt_off'
   kernel/sched/core.o: In function `preempt_latency_stop':
>> kernel/sched/core.c:3113: undefined reference to `trace_preempt_on'
   kernel/sched/core.o: In function `preempt_latency_start':
>> kernel/sched/core.c:3080: undefined reference to `trace_preempt_off'
>> kernel/sched/core.c:3080: undefined reference to `trace_preempt_off'
   kernel/sched/core.o: In function `preempt_latency_stop':
>> kernel/sched/core.c:3113: undefined reference to `trace_preempt_on'
   kernel/sched/core.o: In function `preempt_latency_start':
>> kernel/sched/core.c:3080: undefined reference to `trace_preempt_off'
   kernel/sched/core.o: In function `preempt_latency_stop':
>> kernel/sched/core.c:3113: undefined reference to `trace_preempt_on'

vim +3113 kernel/sched/core.c

265f22a97 kernel/sched/core.c Frederic Weisbecker   2013-05-03  3066  
7e49fcce1 kernel/sched.c      Steven Rostedt        2009-01-22  3067  #if defined(CONFIG_PREEMPT) && (defined(CONFIG_DEBUG_PREEMPT) || \
7e49fcce1 kernel/sched.c      Steven Rostedt        2009-01-22  3068  				defined(CONFIG_PREEMPT_TRACER))
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3069  /*
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3070   * If the value passed in is equal to the current preempt count
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3071   * then we just disabled preemption. Start timing the latency.
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3072   */
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3073  static inline void preempt_latency_start(int val)
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3074  {
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3075  	if (preempt_count() == val) {
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3076  		unsigned long ip = get_lock_parent_ip();
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3077  #ifdef CONFIG_DEBUG_PREEMPT
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3078  		current->preempt_disable_ip = ip;
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3079  #endif
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21 @3080  		trace_preempt_off(CALLER_ADDR0, ip);
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3081  	}
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3082  }
7e49fcce1 kernel/sched.c      Steven Rostedt        2009-01-22  3083  
edafe3a56 kernel/sched/core.c Masami Hiramatsu      2014-04-17  3084  void preempt_count_add(int val)
^1da177e4 kernel/sched.c      Linus Torvalds        2005-04-16  3085  {
6cd8a4bb2 kernel/sched.c      Steven Rostedt        2008-05-12  3086  #ifdef CONFIG_DEBUG_PREEMPT
^1da177e4 kernel/sched.c      Linus Torvalds        2005-04-16  3087  	/*
^1da177e4 kernel/sched.c      Linus Torvalds        2005-04-16  3088  	 * Underflow?
^1da177e4 kernel/sched.c      Linus Torvalds        2005-04-16  3089  	 */
9a11b49a8 kernel/sched.c      Ingo Molnar           2006-07-03  3090  	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
9a11b49a8 kernel/sched.c      Ingo Molnar           2006-07-03  3091  		return;
6cd8a4bb2 kernel/sched.c      Steven Rostedt        2008-05-12  3092  #endif
bdb438065 kernel/sched/core.c Peter Zijlstra        2013-09-10  3093  	__preempt_count_add(val);
6cd8a4bb2 kernel/sched.c      Steven Rostedt        2008-05-12  3094  #ifdef CONFIG_DEBUG_PREEMPT
^1da177e4 kernel/sched.c      Linus Torvalds        2005-04-16  3095  	/*
^1da177e4 kernel/sched.c      Linus Torvalds        2005-04-16  3096  	 * Spinlock count overflowing soon?
^1da177e4 kernel/sched.c      Linus Torvalds        2005-04-16  3097  	 */
33859f7f9 kernel/sched.c      Miguel Ojeda Sandonis 2006-12-10  3098  	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
33859f7f9 kernel/sched.c      Miguel Ojeda Sandonis 2006-12-10  3099  				PREEMPT_MASK - 10);
6cd8a4bb2 kernel/sched.c      Steven Rostedt        2008-05-12  3100  #endif
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3101  	preempt_latency_start(val);
^1da177e4 kernel/sched.c      Linus Torvalds        2005-04-16  3102  }
bdb438065 kernel/sched/core.c Peter Zijlstra        2013-09-10  3103  EXPORT_SYMBOL(preempt_count_add);
edafe3a56 kernel/sched/core.c Masami Hiramatsu      2014-04-17  3104  NOKPROBE_SYMBOL(preempt_count_add);
^1da177e4 kernel/sched.c      Linus Torvalds        2005-04-16  3105  
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3106  /*
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3107   * If the value passed in equals to the current preempt count
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3108   * then we just enabled preemption. Stop timing the latency.
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3109   */
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3110  static inline void preempt_latency_stop(int val)
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3111  {
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3112  	if (preempt_count() == val)
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21 @3113  		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3114  }
47252cfba kernel/sched/core.c Steven Rostedt        2016-03-21  3115  

:::::: The code at line 3113 was first introduced by commit
:::::: 47252cfbac03644ee4a3adfa50c77896aa94f2bb sched/core: Add preempt checks in preempt_schedule() code

:::::: TO: Steven Rostedt <rostedt@goodmis.org>
:::::: CC: Ingo Molnar <mingo@kernel.org>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
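A hedged reading of this report (an inference from the errors above, not taken from the robot's attached .config): the randconfig most likely has CONFIG_PREEMPTIRQ_EVENTS and CONFIG_DEBUG_PREEMPT enabled while CONFIG_PREEMPT_TRACER is off. If the extern prototypes for trace_preempt_{on,off} are guarded by the same compound condition that the trace_irqsoff.c hunk below removes, roughly:

/* Sketch of the suspected declaration-side guard (an assumption, mirroring
 * the condition removed from kernel/trace/trace_irqsoff.c below). */
#if defined(CONFIG_PREEMPT_TRACER) || \
        (defined(CONFIG_DEBUG_PREEMPT) && defined(CONFIG_PREEMPTIRQ_EVENTS))
extern void trace_preempt_on(unsigned long a0, unsigned long a1);
extern void trace_preempt_off(unsigned long a0, unsigned long a1);
#endif

then narrowing the definition side to plain CONFIG_PREEMPT_TRACER leaves that configuration with prototypes but no backing symbols, which is exactly what the linker reports for kernel/softirq.c and kernel/sched/core.c above.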
diff --git a/include/trace/events/preemptirq.h b/include/trace/events/preemptirq.h
index f5024c560d8f..500ad0a221d5 100644
--- a/include/trace/events/preemptirq.h
+++ b/include/trace/events/preemptirq.h
@@ -40,6 +40,11 @@ DEFINE_EVENT(preemptirq_template, irq_disable,
 DEFINE_EVENT(preemptirq_template, irq_enable,
 	     TP_PROTO(unsigned long ip, unsigned long parent_ip),
 	     TP_ARGS(ip, parent_ip));
+#else
+#define trace_irq_disable(...)
+#define trace_irq_enable(...)
+#define trace_irq_disable_rcuidle(...)
+#define trace_irq_enable_rcuidle(...)
 #endif
 
 #ifdef CONFIG_DEBUG_PREEMPT
@@ -50,6 +55,11 @@ DEFINE_EVENT(preemptirq_template, preempt_disable,
 DEFINE_EVENT(preemptirq_template, preempt_enable,
 	     TP_PROTO(unsigned long ip, unsigned long parent_ip),
 	     TP_ARGS(ip, parent_ip));
+#else
+#define trace_preempt_disable(...)
+#define trace_preempt_enable(...)
+#define trace_preempt_disable_rcuidle(...)
+#define trace_preempt_enable_rcuidle(...)
 #endif
 
 #endif /* _TRACE_PREEMPTIRQ_H */
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 03ecb4465ee4..9d62e5f30d58 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -848,8 +848,7 @@ inline void print_irqtrace_events(struct task_struct *curr)
 }
 #endif
 
-#if defined(CONFIG_PREEMPT_TRACER) || \
-	(defined(CONFIG_DEBUG_PREEMPT) && defined(CONFIG_PREEMPTIRQ_EVENTS))
+#ifdef CONFIG_PREEMPT_TRACER
 void trace_preempt_on(unsigned long a0, unsigned long a1)
 {
 	trace_preempt_enable_rcuidle(a0, a1);
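The preemptirq.h hunks follow the usual "compile the call away" convention: when the events are configured out, the trace_* invocations expand to nothing, so call sites build identically either way. A stand-alone sketch of that pattern (user-space C with made-up names, not the kernel macros):

/* stub_demo.c - build with:  cc -DHAVE_PREEMPT_EVENTS stub_demo.c
 *               or simply:   cc stub_demo.c
 */
#include <stdio.h>

#ifdef HAVE_PREEMPT_EVENTS
/* "Real" event: stands in for what DEFINE_EVENT() would generate. */
static void trace_preempt_disable_demo(unsigned long ip, unsigned long parent_ip)
{
        printf("preempt_disable: ip=%#lx parent_ip=%#lx\n", ip, parent_ip);
}
#else
/* Compiled-out variant: plays the role of the patch's empty #define fallbacks,
 * so the call below preprocesses away to an empty statement. */
#define trace_preempt_disable_demo(...)
#endif

int main(void)
{
        /* The call site never changes; the configuration decides what it becomes. */
        trace_preempt_disable_demo(0x1000UL, 0x2000UL);
        return 0;
}

An empty variadic macro is enough here because the tracepoint arguments are plain values; a static inline no-op is the stricter alternative, since it still type-checks and evaluates its arguments.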