seqlock: mark raw_read_seqcount and read_seqcount_retry as __always_inline

Message ID 20190603091008.24776-1-anders.roxell@linaro.org
State Superseded
Headers show
Series seqlock: mark raw_read_seqcount and read_seqcount_retry as __always_inline

Commit Message

Anders Roxell June 3, 2019, 9:10 a.m. UTC
If CONFIG_FUNCTION_GRAPH_TRACER is enabled, the function sched_clock() in
kernel/time/sched_clock.c is marked as notrace. However, the functions
raw_read_seqcount and read_seqcount_retry are only marked as inline. If
CONFIG_OPTIMIZE_INLINING is set, that can make the two functions
traceable, which they shouldn't be.

Rework so that the functions raw_read_seqcount and read_seqcount_retry
are marked with __always_inline, so they will be inlined even if
CONFIG_OPTIMIZE_INLINING is turned on.

Signed-off-by: Anders Roxell <anders.roxell@linaro.org>

---
 include/linux/seqlock.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

-- 
2.20.1

Comments

Will Deacon June 5, 2019, 11:58 a.m. UTC | #1
On Mon, Jun 03, 2019 at 11:10:08AM +0200, Anders Roxell wrote:
> If CONFIG_FUNCTION_GRAPH_TRACER is enabled function sched_clock() in
> kernel/time/sched_clock.c is marked as notrace. However, functions
> raw_read_seqcount and read_seqcount_retry are marked as inline. If
> CONFIG_OPTIMIZE_INLINING is set that will make the two functions
> tracable which they shouldn't.

Might be nice to elaborate a bit here on what goes wrong specifically for
seqlocks. I assume something ends up going recursive thanks to the tracing
code?

With that:

Acked-by: Will Deacon <will.deacon@arm.com>


Will
Patch

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index bcf4cf26b8c8..1b18e3df186e 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -127,7 +127,7 @@  static inline unsigned __read_seqcount_begin(const seqcount_t *s)
  * seqcount without any lockdep checking and without checking or
  * masking the LSB. Calling code is responsible for handling that.
  */
-static inline unsigned raw_read_seqcount(const seqcount_t *s)
+static __always_inline unsigned raw_read_seqcount(const seqcount_t *s)
 {
 	unsigned ret = READ_ONCE(s->sequence);
 	smp_rmb();
@@ -215,7 +215,8 @@  static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
  * If the critical section was invalid, it must be ignored (and typically
  * retried).
  */
-static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
+static
+__always_inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
 {
 	smp_rmb();
 	return __read_seqcount_retry(s, start);