Message ID | 20190611154751.10923-1-anders.roxell@linaro.org
State      | New
Series     | [v2] seqlock: mark raw_read_seqcount and read_seqcount_retry as __always_inline
On Tue, 11 Jun 2019, Anders Roxell wrote:
> With the function graph tracer, each traced function calls sched_clock()
> to take a timestamp. As sched_clock() uses
> raw_read_seqcount()/read_seqcount_retry(), we must ensure that these
> do not in turn trigger the graph tracer.
> Both functions are marked as inline. However, if CONFIG_OPTIMIZE_INLINING
> is set, that may make the two functions traceable, which they shouldn't be.
>
> Rework so that the functions raw_read_seqcount and read_seqcount_retry are
> marked with __always_inline so they will be inlined even if
> CONFIG_OPTIMIZE_INLINING is turned on.

Why just those two? The same issue can happen in other places with other
clocks which can be utilized by the tracer.

Aside from your particular issue, there is no reason why any of those
functions should ever trigger a graph trace.

Thanks,

	tglx
On Sun, 23 Jun 2019 at 13:16, Thomas Gleixner <tglx@linutronix.de> wrote:
>
> On Tue, 11 Jun 2019, Anders Roxell wrote:
> > With the function graph tracer, each traced function calls sched_clock()
> > to take a timestamp. As sched_clock() uses
> > raw_read_seqcount()/read_seqcount_retry(), we must ensure that these
> > do not in turn trigger the graph tracer.
> > Both functions are marked as inline. However, if CONFIG_OPTIMIZE_INLINING
> > is set, that may make the two functions traceable, which they shouldn't be.
> >
> > Rework so that the functions raw_read_seqcount and read_seqcount_retry are
> > marked with __always_inline so they will be inlined even if
> > CONFIG_OPTIMIZE_INLINING is turned on.
>
> Why just those two? The same issue can happen in other places with other
> clocks which can be utilized by the tracer.
>
> Aside from your particular issue, there is no reason why any of those
> functions should ever trigger a graph trace.

Yes, you are correct. I'll update the patch to add __always_inline to all
functions in that file.

Cheers,
Anders
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index bcf4cf26b8c8..1b18e3df186e 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -127,7 +127,7 @@ static inline unsigned __read_seqcount_begin(const seqcount_t *s)
  * seqcount without any lockdep checking and without checking or
  * masking the LSB. Calling code is responsible for handling that.
  */
-static inline unsigned raw_read_seqcount(const seqcount_t *s)
+static __always_inline unsigned raw_read_seqcount(const seqcount_t *s)
 {
 	unsigned ret = READ_ONCE(s->sequence);
 	smp_rmb();
@@ -215,7 +215,8 @@ static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
  * If the critical section was invalid, it must be ignored (and typically
  * retried).
  */
-static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
+static
+__always_inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
 {
 	smp_rmb();
 	return __read_seqcount_retry(s, start);