[v2] seqlock: mark raw_read_seqcount and read_seqcount_retry as __always_inline

Message ID 20190611154751.10923-1-anders.roxell@linaro.org
State New
Series [v2] seqlock: mark raw_read_seqcount and read_seqcount_retry as __always_inline

Commit Message

Anders Roxell June 11, 2019, 3:47 p.m. UTC
With the function graph tracer, each traced function calls sched_clock()
to take a timestamp. As sched_clock() uses
raw_read_seqcount()/read_seqcount_retry(), we must ensure that these
do not in turn trigger the graph tracer.
Both functions are marked as inline. However, if CONFIG_OPTIMIZE_INLINING
is set, the compiler may leave them out of line, making them traceable
when they shouldn't be.

Rework so that raw_read_seqcount() and read_seqcount_retry() are marked
__always_inline, so they are inlined even if CONFIG_OPTIMIZE_INLINING is
turned on.
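
For context, the read side that motivates this patch follows the standard
seqcount retry loop. Below is a minimal sketch of the pattern used by
kernel/time/sched_clock.c (simplified; the function and parameter names
here are illustrative, not the kernel's own):

	#include <linux/seqlock.h>
	#include <linux/types.h>

	/*
	 * sched_clock() itself is marked notrace, but if the two seqcount
	 * helpers it calls end up out of line, they become separate
	 * functions that the graph tracer can hook -- and since the tracer
	 * calls sched_clock() for its timestamps, tracing them would recurse.
	 */
	static u64 sched_clock_sketch(seqcount_t *sc, u64 (*read_cycles)(void))
	{
		unsigned int seq;
		u64 cyc;

		do {
			seq = raw_read_seqcount(sc);	/* snapshot the sequence */
			cyc = read_cycles();		/* sample the clock source */
		} while (read_seqcount_retry(sc, seq));	/* a writer ran: retry */

		return cyc;
	}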

Acked-by: Will Deacon <will.deacon@arm.com>

Signed-off-by: Anders Roxell <anders.roxell@linaro.org>

---
 include/linux/seqlock.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

-- 
2.20.1

Comments

Thomas Gleixner June 23, 2019, 11:16 a.m. UTC | #1
On Tue, 11 Jun 2019, Anders Roxell wrote:

> With the function graph tracer, each traced function calls sched_clock()
> to take a timestamp. As sched_clock() uses
> raw_read_seqcount()/read_seqcount_retry(), we must ensure that these
> do not in turn trigger the graph tracer.
> Both functions are marked as inline. However, if CONFIG_OPTIMIZE_INLINING
> is set, the compiler may leave them out of line, making them traceable
> when they shouldn't be.
>
> Rework so that raw_read_seqcount() and read_seqcount_retry() are marked
> __always_inline, so they are inlined even if CONFIG_OPTIMIZE_INLINING is
> turned on.

Why just those two? The same issue can happen in other places with other
clocks that can be utilized by the tracer.

Aside from your particular issue, there is no reason why any of those
functions should ever trigger the graph tracer.

Thanks,

	tglx
Anders Roxell July 26, 2019, 10:01 a.m. UTC | #2
On Sun, 23 Jun 2019 at 13:16, Thomas Gleixner <tglx@linutronix.de> wrote:
>
> On Tue, 11 Jun 2019, Anders Roxell wrote:
>
> > With the function graph tracer, each traced function calls sched_clock()
> > to take a timestamp. As sched_clock() uses
> > raw_read_seqcount()/read_seqcount_retry(), we must ensure that these
> > do not in turn trigger the graph tracer.
> > Both functions are marked as inline. However, if CONFIG_OPTIMIZE_INLINING
> > is set, the compiler may leave them out of line, making them traceable
> > when they shouldn't be.
> >
> > Rework so that raw_read_seqcount() and read_seqcount_retry() are marked
> > __always_inline, so they are inlined even if CONFIG_OPTIMIZE_INLINING is
> > turned on.
>
> Why just those two? The same issue can happen in other places with other
> clocks that can be utilized by the tracer.
>
> Aside from your particular issue, there is no reason why any of those
> functions should ever trigger the graph tracer.

Yes, you are correct. I'll update the patch to mark all the functions in
that file __always_inline.

Cheers,
Anders
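
To illustrate the distinction discussed above: with
CONFIG_OPTIMIZE_INLINING=y, plain inline is only a hint, so the compiler
may emit an out-of-line copy, which then receives an ftrace entry hook
like any other function; __always_inline removes that possibility. A
sketch under that assumption (function names and bodies are illustrative):

	#include <linux/compiler.h>
	#include <linux/seqlock.h>

	/* Hint only: the compiler may still emit an out-of-line copy,
	 * which becomes a traceable symbol the graph tracer can hook. */
	static inline unsigned read_seq_hinted(const seqcount_t *s)
	{
		return READ_ONCE(s->sequence);
	}

	/* Forced: the body is always merged into the caller, so no
	 * separate, traceable symbol exists for the tracer to attach to. */
	static __always_inline unsigned read_seq_forced(const seqcount_t *s)
	{
		return READ_ONCE(s->sequence);
	}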

Patch

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index bcf4cf26b8c8..1b18e3df186e 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -127,7 +127,7 @@ static inline unsigned __read_seqcount_begin(const seqcount_t *s)
  * seqcount without any lockdep checking and without checking or
  * masking the LSB. Calling code is responsible for handling that.
  */
-static inline unsigned raw_read_seqcount(const seqcount_t *s)
+static __always_inline unsigned raw_read_seqcount(const seqcount_t *s)
 {
 	unsigned ret = READ_ONCE(s->sequence);
 	smp_rmb();
@@ -215,7 +215,8 @@ static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
  * If the critical section was invalid, it must be ignored (and typically
  * retried).
  */
-static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
+static
+__always_inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
 {
 	smp_rmb();
 	return __read_seqcount_retry(s, start);