[v3,00/17] timer: Move from a push remote at enqueue to a pull at expiry model

Message ID 20221025135850.51044-1-anna-maria@linutronix.de

Anna-Maria Behnsen Oct. 25, 2022, 1:58 p.m. UTC
Placing timers at enqueue time on a target CPU based on dubious heuristics
does not make any sense:

 1) Most timer wheel timers are canceled or rearmed before they expire.

 2) The heuristics to predict which CPU will be busy when the timer expires
    are wrong by definition.

So placing the timers at enqueue time wastes precious cycles.

The proper solution to this problem is to always queue the timers on the
local CPU and allow the non-pinned timers to be pulled onto a busy CPU at
expiry time.

Therefore split the timer storage into local pinned and global timers:
Local pinned timers are always expired on the CPU on which they have been
queued. Global timers can be expired on any CPU.
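
A minimal sketch of the resulting enqueue time decision (struct layout and
helper name are illustrative, not necessarily the exact code of this
series):

#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/timer.h>

/* Trimmed down; the real per CPU bases live in kernel/time/timer.c. */
struct timer_base {
	raw_spinlock_t	lock;
	unsigned long	next_expiry;
	/* wheel buckets etc. omitted */
};

enum { BASE_LOCAL, BASE_GLOBAL, NR_BASES };

static DEFINE_PER_CPU(struct timer_base, timer_bases[NR_BASES]);

/*
 * A TIMER_PINNED timer always goes onto the base of the local CPU;
 * everything else goes onto the global base and may be expired by
 * any CPU.
 */
static inline struct timer_base *get_timer_this_cpu_base(u32 tflags)
{
	int index = (tflags & TIMER_PINNED) ? BASE_LOCAL : BASE_GLOBAL;

	return this_cpu_ptr(&timer_bases[index]);
}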

As long as a CPU is busy it expires both local and global timers. When a
CPU goes idle it arms its timer for the first expiring local timer. If the
first expiring pinned (local) timer is before the first expiring movable
timer, then no action is required because the CPU will wake up before the
first movable timer expires. If the first expiring movable timer is before
the first expiring pinned (local) timer, then this timer is queued into an
idle timerqueue and eventually expired by some other active CPU.
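
Condensed into code, that idle path decision could look like this
(tmigr_cpu_deactivate() loosely follows the naming of the timer migration
code added by this series; the helper around it is illustrative):

#include <linux/types.h>

/* Provided by the timer migration code (simplified signature). */
extern void tmigr_cpu_deactivate(u64 tevt_global);

/*
 * tevt_local:  first expiring pinned (local) timer
 * tevt_global: first expiring movable (global) timer
 *
 * Returns the expiry the CPU arms its timer for before going idle.
 */
static u64 timer_idle_select(u64 tevt_local, u64 tevt_global)
{
	/*
	 * Pinned timer first: the CPU wakes up early enough to expire
	 * the movable timer itself, so no handoff is required.
	 */
	if (tevt_local <= tevt_global)
		return tevt_local;

	/*
	 * Movable timer first: queue it into the idle timerqueue so
	 * that some other active CPU expires it, and arm only for the
	 * pinned timer locally.
	 */
	tmigr_cpu_deactivate(tevt_global);
	return tevt_local;
}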

To avoid global locking, the timerqueues are implemented as a hierarchy. The
lowest level of the hierarchy holds the CPUs. The CPUs are assigned to
groups of 8, which are separated per node. If more than one CPU group
exists, then a second level in the hierarchy collects the groups. Depending
on the size of the system, more than two levels are required. Each group has
a "migrator" which checks the timerqueue during the tick for remotely
expirable timers.
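
The rough shape of a hierarchy level (field names are illustrative; the
real definitions are in kernel/time/timer_migration.h of this series):

#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/timerqueue.h>

struct tmigr_event {
	struct timerqueue_node	nextevt;	/* rbtree node plus expiry */
	unsigned int		cpu;		/* valid at the lowest level */
};

struct tmigr_group {
	raw_spinlock_t		lock;
	struct tmigr_group	*parent;
	struct timerqueue_head	events;		/* first events of idle children */
	struct tmigr_event	groupevt;	/* this group's event in the parent */
	atomic_t		migr_state;	/* migrator plus active children */
	unsigned int		level;		/* 0 == lowest (CPU) level */
	int			numa_node;	/* groups are separated per node */
};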

If the last CPU in a group goes idle, it reports the first expiring event in
the group up to the next group(s) in the hierarchy. If the last CPU in the
system goes idle, it arms its timer for the first system-wide expiring timer
to ensure
that no timer event is missed.
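
A simplified sketch of that walk up the hierarchy, reusing the structures
from above (group_is_active() is a hypothetical helper standing in for the
series' migr_state checks):

static bool group_is_active(struct tmigr_group *group);

static void tmigr_report_up(struct tmigr_group *child, u64 child_first_exp)
{
	struct tmigr_group *group;

	for (group = child->parent; group;
	     child = group, group = group->parent) {
		raw_spin_lock(&group->lock);

		/* Queue the idle child's first expiring event here. */
		child->groupevt.nextevt.expires = child_first_exp;
		timerqueue_add(&group->events, &child->groupevt.nextevt);

		/* This level reports its own first expiry further up. */
		child_first_exp = timerqueue_getnext(&group->events)->expires;

		raw_spin_unlock(&group->lock);

		/*
		 * If the group still has an active child, that child
		 * acts as migrator and will expire the queued events,
		 * so there is no need to go further up.
		 */
		if (group_is_active(group))
			break;
	}
}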


Testing
~~~~~~~

The impact of wasting cycles during enqueue by using the heuristic in
contrast to always queueing the timer on the local CPU was measured with a
micro benchmark. For this, a timer is enqueued and dequeued in a loop with
1000 repetitions on an isolated CPU and the time the loop takes is
measured. A quarter of the remaining CPUs was kept busy. This measurement
was repeated several times. With the patch queue the average duration was
reduced by approximately 25%.

	145ns	plain v6
	109ns	v6 with patch queue
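
The benchmark itself is not part of the series; a minimal reconstruction
of the described loop could look like this (function names are ours):

#include <linux/jiffies.h>
#include <linux/ktime.h>
#include <linux/printk.h>
#include <linux/timer.h>

static void dummy_timer_fn(struct timer_list *t) { }

static void measure_enqueue_dequeue(void)
{
	struct timer_list timer;
	ktime_t start, stop;
	int i;

	timer_setup(&timer, dummy_timer_fn, 0);

	start = ktime_get();
	for (i = 0; i < 1000; i++) {
		/* Enqueue and immediately cancel again. */
		mod_timer(&timer, jiffies + HZ);
		del_timer(&timer);
	}
	stop = ktime_get();

	pr_info("enqueue/dequeue: %lld ns per iteration\n",
		ktime_to_ns(ktime_sub(stop, start)) / 1000);
}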


Furthermore the impact on the residency in deep idle states of an idle
system was investigated. The patch queue does not degrade this behavior.


During testing on a mostly idle machine a ping-pong effect could be
observed: a process_timeout timer is expired remotely on a non-idle CPU.
Then the CPU on which schedule_timeout() was executed to enqueue the timer
comes out of idle, restarts the timer using schedule_timeout() and goes
back to idle again. This is caused by the fair scheduler, which tries to
keep a task on the CPU it previously executed on.


Next Steps
~~~~~~~~~~

Deferrable timers are no longer required as they can be converted to global
timers. If a CPU goes idle, a formerly deferrable timer will not prevent
the CPU from sleeping as long as possible. Only the last migrator CPU has
to take care of them. This conversion will be done in a follow-up series.
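
For a timer user the conversion should amount to not much more than
dropping the flag (sketch, assuming the pull model is in place;
housekeeping_fn and the timer are made up for illustration):

#include <linux/timer.h>

static void housekeeping_fn(struct timer_list *t) { }

static struct timer_list housekeeping_timer;

static void __init housekeeping_init(void)
{
	/*
	 * Today this is TIMER_DEFERRABLE so that an idle CPU is not
	 * woken just for housekeeping. With the pull model a plain
	 * global timer does not prevent the CPU from sleeping either:
	 * an idle CPU hands it off and only the last migrator CPU has
	 * to take care of it, so the flag can simply be dropped:
	 *
	 *	timer_setup(&housekeeping_timer, housekeeping_fn,
	 *		    TIMER_DEFERRABLE);
	 */
	timer_setup(&housekeeping_timer, housekeeping_fn, 0);
}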


Changes vs. v2: https://lore.kernel.org/r/20170418111102.490432548@linutronix.de/
  - Minimize usage of locks by storing data using atomic_cmpxchg() for
    migrator information and information about active CPUs (see the
    sketch below).
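
A sketch of that scheme (the field layout is illustrative; the series
packs the migrator and active CPU information into a single word):

#include <linux/atomic.h>
#include <linux/types.h>

union tmigr_state {
	u32	state;
	struct {
		u8	migrator;	/* child currently acting as migrator */
		u8	active;		/* bitmask of active children */
		u16	seq;		/* sequence count (illustrative) */
	};
};

static void tmigr_set_migrator(atomic_t *migr_state, u8 new_migrator)
{
	u32 old = atomic_read(migr_state);

	for (;;) {
		union tmigr_state new;
		u32 prev;

		new.state = old;
		new.migrator = new_migrator;

		/* Retry with the updated snapshot if someone raced us. */
		prev = atomic_cmpxchg(migr_state, old, new.state);
		if (prev == old)
			break;
		old = prev;
	}
}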


Thanks,

	Anna-Maria


Anna-Maria Behnsen (14):
  cpufreq: Prepare timer flags for hierarchical timer pull model
  tick-sched: Warn when next tick seems to be in the past
  timer: Move store of next event into __next_timer_interrupt()
  timer: Split next timer interrupt logic
  timer: Keep the pinned timers separate from the others
  timer: Retrieve next expiry of pinned/non-pinned timers separately
  timer: Rename get_next_timer_interrupt()
  timer: Split out "get next timer interrupt" functionality
  timer: Add get next timer interrupt functionality for remote CPUs
  timer: Check if timers base is handled already
  timer: Implement the hierarchical pull model
  timer_migration: Add tracepoints
  add_timer_on(): Make sure callers have TIMER_PINNED flag
  timer: Always queue timers on the local CPU

Richard Cochran (linutronix GmbH) (2):
  timer: Restructure internal locking
  tick/sched: Split out jiffies update helper function

Thomas Gleixner (1):
  timer: Rework idle logic

 arch/x86/kernel/tsc_sync.c             |    3 +-
 drivers/cpufreq/powernv-cpufreq.c      |   15 +-
 include/linux/cpuhotplug.h             |    1 +
 include/trace/events/timer_migration.h |  277 ++++++
 kernel/time/Makefile                   |    3 +
 kernel/time/clocksource.c              |    2 +-
 kernel/time/tick-internal.h            |   12 +-
 kernel/time/tick-sched.c               |   50 +-
 kernel/time/timer.c                    |  360 +++++--
 kernel/time/timer_migration.c          | 1263 ++++++++++++++++++++++++
 kernel/time/timer_migration.h          |  121 +++
 kernel/workqueue.c                     |    7 +-
 12 files changed, 2011 insertions(+), 103 deletions(-)
 create mode 100644 include/trace/events/timer_migration.h
 create mode 100644 kernel/time/timer_migration.c
 create mode 100644 kernel/time/timer_migration.h