From patchwork Mon Apr 24 14:01:32 2017
X-Patchwork-Submitter: Daniel Lezcano
X-Patchwork-Id: 98047
From: Daniel Lezcano
To: tglx@linutronix.de
Cc: Peter Zijlstra, "Rafael J. Wysocki", Vincent Guittot, Nicolas Pitre, linux-kernel@vger.kernel.org
Subject: [PATCH V9 2/3] irq: Track the interrupt timings
Date: Mon, 24 Apr 2017 16:01:32 +0200
Message-Id: <1493042494-14057-2-git-send-email-daniel.lezcano@linaro.org>
In-Reply-To: <1493042494-14057-1-git-send-email-daniel.lezcano@linaro.org>
References: <1493042494-14057-1-git-send-email-daniel.lezcano@linaro.org>

The interrupt framework gives a lot of information about each interrupt.
It does not, however, keep track of when those interrupts occur. This
patch provides a means to record the timestamp of each interrupt
occurrence in a per-CPU circular buffer, to help predict the next
occurrence using a statistical model.
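The buffer sizing trick matters here: because the buffer size is a power of two, the slot index is obtained by masking a free-running counter, so the hot path needs no modulo and no explicit wrap test. A stand-alone user-space sketch of that scheme (the macro names mirror the patch; the struct and function names are illustrative, not the kernel's):

```c
#include <stdint.h>

/* IRQ_TIMINGS_SIZE is a power of two, so masking the free-running
 * counter with IRQ_TIMINGS_MASK yields the slot index and the buffer
 * wraps with no modulo in the interrupt hot path. */
#define IRQ_TIMINGS_SHIFT 5
#define IRQ_TIMINGS_SIZE  (1 << IRQ_TIMINGS_SHIFT)
#define IRQ_TIMINGS_MASK  (IRQ_TIMINGS_SIZE - 1)

struct timings_sketch {
        uint64_t values[IRQ_TIMINGS_SIZE]; /* circular buffer */
        unsigned int count;                /* events since last drained */
};

static inline void timings_push(struct timings_sketch *t, uint64_t v)
{
        /* Old slots are silently overwritten once the buffer wraps. */
        t->values[t->count & IRQ_TIMINGS_MASK] = v;
        t->count++;
}
```

When more than IRQ_TIMINGS_SIZE events arrive between two drains, the oldest entries are overwritten, which is exactly what the consumer in patch 3/3 accounts for by comparing count against the array size.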
Each CPU can store IRQ_TIMINGS_SIZE events; the current value of
IRQ_TIMINGS_SIZE is 32. Each event is encoded into a single u64, where
the upper 48 bits are used for the timestamp and the lower 16 bits for
the irq number.

A static key is introduced so that when irq prediction is switched off
at runtime, the overhead is reduced to near zero. As a result, most of
the code lives in internals.h for inlining reasons, and very little in
the new file timings.c. The latter will grow in the next patch, which
provides the statistical model for next-event prediction.

Note this code is by default *not* compiled into the kernel.

Signed-off-by: Daniel Lezcano
Acked-by: Nicolas Pitre
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Vincent Guittot
---
V9:
 - Changed indentation level by inverting the static key condition
 - Encoded interrupt and timestamp into a u64 variable
 - Boolean enable instead of refcount for the static key
V8:
 - Replaced percpu field in the irqdesc by a percpu array containing
   the timings and the associated irq.
   The function irq_timings_get_next() is no longer needed, so it is
   removed
 - Removed all unused code resulting from the irqdesc -> percpu
   timings storage conversion
V7:
 - Mentioned in the irq_timings_get_next() function description that
   the function must be called inside an RCU read-side locked section
V6:
 - Renamed handle_irq_timings to record_irq_time
 - Stored the event time instead of the interval time
 - Removed the 'timestamp' field from the timings structure
 - Moved _handle_irq_timings content inside record_irq_time
V5:
 - Changed comment about 'deterministic' as the comment is confusing
 - Added license comment in the header
 - Replaced irq_timings_get/put by irq_timings_enable/disable
 - Moved IRQS_TIMINGS check in the handle_timings inline function
 - Dropped 'if !prev' as it is pointless
 - Stored time interval on a nsec basis with u64 instead of u32
 - Removed redundant store
 - Removed the math
V4:
 - Added a static key
 - Added more comments for irq_timings_get_next()
 - Unified some function names to be prefixed by 'irq_timings_...'
 - Fixed a rebase error
V3:
 - Replaced ktime_get() by local_clock()
 - Shared irqs are not handled
 - Simplified code by adding the timing in the irqdesc struct
 - Added a function to browse the irq timings
V2:
 - Fixed kerneldoc comment
 - Removed data field from the struct irq timing
 - Changed the lock section comment
 - Removed semi-colon style with empty stub
 - Replaced macro by static inline
 - Fixed static functions declaration
RFC:
 - initial posting
---
 include/linux/interrupt.h |  5 +++
 kernel/irq/Kconfig        |  3 ++
 kernel/irq/Makefile       |  1 +
 kernel/irq/handle.c       |  2 ++
 kernel/irq/internals.h    | 84 +++++++++++++++++++++++++++++++++++++++++++++++
 kernel/irq/manage.c       |  3 ++
 kernel/irq/timings.c      | 30 +++++++++++++++++
 7 files changed, 128 insertions(+)
 create mode 100644 kernel/irq/timings.c

-- 
1.9.1

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 8f44f23..853aef7 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -712,6 +712,11 @@ static inline void init_irq_proc(void)
 }
 #endif
 
+#ifdef CONFIG_IRQ_TIMINGS
+void irq_timings_enable(void);
+void irq_timings_disable(void);
+#endif
+
 struct seq_file;
 int show_interrupts(struct seq_file *p, void *v);
 int arch_show_interrupts(struct seq_file *p, int prec);
diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
index 3bbfd6a..38e551d 100644
--- a/kernel/irq/Kconfig
+++ b/kernel/irq/Kconfig
@@ -81,6 +81,9 @@ config GENERIC_MSI_IRQ_DOMAIN
 config HANDLE_DOMAIN_IRQ
        bool
 
+config IRQ_TIMINGS
+       bool
+
 config IRQ_DOMAIN_DEBUG
        bool "Expose hardware/virtual IRQ mapping via debugfs"
        depends on IRQ_DOMAIN && DEBUG_FS
diff --git a/kernel/irq/Makefile b/kernel/irq/Makefile
index 1d3ee31..efb5f14 100644
--- a/kernel/irq/Makefile
+++ b/kernel/irq/Makefile
@@ -10,3 +10,4 @@ obj-$(CONFIG_PM_SLEEP) += pm.o
 obj-$(CONFIG_GENERIC_MSI_IRQ) += msi.o
 obj-$(CONFIG_GENERIC_IRQ_IPI) += ipi.o
 obj-$(CONFIG_SMP) += affinity.o
+obj-$(CONFIG_IRQ_TIMINGS) += timings.o
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index d3f2490..eb4d3e8 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -138,6 +138,8 @@ irqreturn_t __handle_irq_event_percpu(struct irq_desc *desc, unsigned int *flags
        unsigned int irq = desc->irq_data.irq;
        struct irqaction *action;
 
+       record_irq_time(desc);
+
        for_each_action_of_desc(desc, action) {
                irqreturn_t res;
 
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index bc226e7..df51b5e0 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef CONFIG_SPARSE_IRQ
 # define IRQ_BITMAP_BITS       (NR_IRQS + 8196)
@@ -57,6 +58,7 @@ enum {
        IRQS_WAITING            = 0x00000080,
        IRQS_PENDING            = 0x00000200,
        IRQS_SUSPENDED          = 0x00000800,
+       IRQS_TIMINGS            = 0x00001000,
 };
 
 #include "debug.h"
@@ -226,3 +228,85 @@ static inline int irq_desc_is_chained(struct irq_desc *desc)
 static inline void irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action) { }
 #endif
+
+#ifdef CONFIG_IRQ_TIMINGS
+
+#define IRQ_TIMINGS_SHIFT      5
+#define IRQ_TIMINGS_SIZE       (1 << IRQ_TIMINGS_SHIFT)
+#define IRQ_TIMINGS_MASK       (IRQ_TIMINGS_SIZE - 1)
+
+struct irq_timings {
+       u64 values[IRQ_TIMINGS_SIZE]; /* our circular buffer */
+       unsigned int count; /* number of interrupts since the last inspection */
+};
+
+DECLARE_PER_CPU(struct irq_timings, irq_timings);
+
+static inline void remove_timings(struct irq_desc *desc)
+{
+       desc->istate &= ~IRQS_TIMINGS;
+}
+
+static inline void setup_timings(struct irq_desc *desc, struct irqaction *act)
+{
+       /*
+        * We don't need the measurement because the idle code already
+        * knows the next expiry event.
+        */
+       if (act->flags & __IRQF_TIMER)
+               return;
+
+       desc->istate |= IRQS_TIMINGS;
+}
+
+extern void irq_timings_enable(void);
+extern void irq_timings_disable(void);
+
+extern struct static_key_false irq_timing_enabled;
+
+/*
+ * The interrupt number and the timestamp are encoded into a single
+ * u64 variable to optimize the size.
+ * A 48-bit timestamp and a 16-bit IRQ number are more than sufficient.
+ * Who cares about an IRQ after 78 hours of idle time?
+ */
+static inline u64 irq_timing_encode(u64 timestamp, int irq)
+{
+       return (timestamp << 16) | irq;
+}
+
+static inline void irq_timing_decode(u64 value, u64 *timestamp, int *irq)
+{
+       *timestamp = value >> 16;
+       *irq = value & U16_MAX;
+}
+
+/*
+ * The function record_irq_time is only called in one place in the
+ * interrupt handler. We want this function always inlined so the code
+ * inside is embedded in the caller and the static key branching
+ * code can act at the higher level. Without the explicit
+ * __always_inline we can end up with a function call and a small
+ * overhead in the hotpath for nothing.
+ */
+static __always_inline void record_irq_time(struct irq_desc *desc)
+{
+       if (!static_branch_likely(&irq_timing_enabled))
+               return;
+
+       if (desc->istate & IRQS_TIMINGS) {
+               struct irq_timings *timings = this_cpu_ptr(&irq_timings);
+
+               timings->values[timings->count & IRQ_TIMINGS_MASK] =
+                       irq_timing_encode(local_clock(),
+                                         irq_desc_get_irq(desc));
+
+               timings->count++;
+       }
+}
+#else
+static inline void remove_timings(struct irq_desc *desc) {}
+static inline void setup_timings(struct irq_desc *desc,
+                                struct irqaction *act) {};
+static inline void record_irq_time(struct irq_desc *desc) {}
+#endif /* CONFIG_IRQ_TIMINGS */
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 1ba7734..2686845 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1372,6 +1372,8 @@ static void irq_release_resources(struct irq_desc *desc)
 
        raw_spin_unlock_irqrestore(&desc->lock, flags);
 
+       setup_timings(desc, new);
+
        /*
         * Strictly no need to wake it up, but hung_task complains
         * when no hard interrupt wakes the thread up.
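The 48/16-bit split used by irq_timing_encode()/irq_timing_decode() can be checked in isolation. This is a user-space round trip of the same encoding scheme, a sketch only: 0xffff stands in for the kernel's U16_MAX, and the top 16 bits of the timestamp are deliberately shifted out (2^48 ns is roughly 78 hours, hence the comment above):

```c
#include <stdint.h>

/* The timestamp occupies the upper 48 bits (its own top 16 bits are
 * shifted out) and the irq number the lower 16 bits. */
static inline uint64_t timing_encode(uint64_t timestamp, int irq)
{
        return (timestamp << 16) | ((uint64_t)irq & 0xffff);
}

static inline void timing_decode(uint64_t value, uint64_t *timestamp, int *irq)
{
        *timestamp = value >> 16;
        *irq = (int)(value & 0xffff);
}
```

Any timestamp beyond 2^48 nanoseconds silently truncates, which is the accepted trade-off the comment spells out.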
@@ -1500,6 +1502,7 @@ static struct irqaction *__free_irq(unsigned int irq, void *dev_id)
                irq_settings_clr_disable_unlazy(desc);
                irq_shutdown(desc);
                irq_release_resources(desc);
+               remove_timings(desc);
        }
 
 #ifdef CONFIG_SMP
diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
new file mode 100644
index 0000000..56cf687
--- /dev/null
+++ b/kernel/irq/timings.c
@@ -0,0 +1,30 @@
+/*
+ * linux/kernel/irq/timings.c
+ *
+ * Copyright (C) 2016, Linaro Ltd - Daniel Lezcano
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+#include
+#include
+#include
+#include
+
+#include "internals.h"
+
+DEFINE_STATIC_KEY_FALSE(irq_timing_enabled);
+
+DEFINE_PER_CPU(struct irq_timings, irq_timings);
+
+void irq_timings_enable(void)
+{
+       static_branch_enable(&irq_timing_enabled);
+}
+
+void irq_timings_disable(void)
+{
+       static_branch_disable(&irq_timing_enabled);
+}

From patchwork Mon Apr 24 14:01:33 2017
X-Patchwork-Submitter: Daniel Lezcano
X-Patchwork-Id: 98046
From: Daniel Lezcano
To: tglx@linutronix.de
Cc: Peter Zijlstra, "Rafael J. Wysocki", Vincent Guittot, Nicolas Pitre, linux-kernel@vger.kernel.org
Subject: [PATCH V9 3/3] irq: Compute the periodic interval for interrupts
Date: Mon, 24 Apr 2017 16:01:33 +0200
Message-Id: <1493042494-14057-3-git-send-email-daniel.lezcano@linaro.org>
In-Reply-To: <1493042494-14057-1-git-send-email-daniel.lezcano@linaro.org>
References: <1493042494-14057-1-git-send-email-daniel.lezcano@linaro.org>

An interrupt shows a burst of activity at a periodic time interval,
followed by one or two peaks of longer interval. As the time intervals
are periodic, statistically speaking they follow a normal distribution,
and each interrupt can be tracked individually.

This patch does statistics on all interrupts, except the timers, which
are deterministic by essence. The goal is to extract the periodicity
for each interrupt, add it to the last timestamp, and so obtain the
next event. Taking the earliest prediction gives the expected wakeup
time of the system (assuming a timer won't expire before).

As stated in the previous patch, this code is not enabled in the
kernel by default.

Signed-off-by: Daniel Lezcano
Cc: Peter Zijlstra
Cc: Rafael J.
Wysocki
Cc: Vincent Guittot
Cc: Nicolas Pitre
---
Changelog:
V9:
 - Deal with the 48+16 bits encoded values
 - Changed irq_stat => irqt_stat to prevent a name collision on s390
 - Changed div64 to a constant IRQ_TIMINGS_SHIFT bit shift for the average
 - Changed div64 to a constant IRQ_TIMINGS_SHIFT bit shift for the variance
---
 include/linux/interrupt.h |   1 +
 kernel/irq/internals.h    |  19 +++
 kernel/irq/timings.c      | 348 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 368 insertions(+)

-- 
1.9.1

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 853aef7..5d4e43a 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -715,6 +715,7 @@ static inline void init_irq_proc(void)
 #ifdef CONFIG_IRQ_TIMINGS
 void irq_timings_enable(void);
 void irq_timings_disable(void);
+u64 irq_timings_next_event(u64 now);
 #endif
 
 struct seq_file;
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index df51b5e0..1f56c3d 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -242,13 +242,21 @@ struct irq_timings {
 
 DECLARE_PER_CPU(struct irq_timings, irq_timings);
 
+extern void irq_timings_free(int irq);
+extern int irq_timings_alloc(int irq);
+
 static inline void remove_timings(struct irq_desc *desc)
 {
        desc->istate &= ~IRQS_TIMINGS;
+
+       irq_timings_free(irq_desc_get_irq(desc));
 }
 
 static inline void setup_timings(struct irq_desc *desc, struct irqaction *act)
 {
+       int irq = irq_desc_get_irq(desc);
+       int ret;
+
        /*
         * We don't need the measurement because the idle code already
         * knows the next expiry event.
@@ -256,6 +264,17 @@ static inline void setup_timings(struct irq_desc *desc, struct irqaction *act)
        if (act->flags & __IRQF_TIMER)
                return;
 
+       /*
+        * In case the timing allocation fails, we just want to warn,
+        * not fail, letting the system boot anyway.
+        */
+       ret = irq_timings_alloc(irq);
+       if (ret) {
+               pr_warn("Failed to allocate irq timing stats for irq%d (%d)",
+                       irq, ret);
+               return;
+       }
+
        desc->istate |= IRQS_TIMINGS;
 }
 
diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
index 56cf687..04d62b3 100644
--- a/kernel/irq/timings.c
+++ b/kernel/irq/timings.c
@@ -9,9 +9,14 @@
  *
  */
 #include
+#include
 #include
 #include
+#include
 #include
+#include
+
+#include
 
 #include "internals.h"
 
@@ -19,6 +24,18 @@
 
 DEFINE_PER_CPU(struct irq_timings, irq_timings);
 
+struct irqt_stat {
+       u64     ne;             /* next event */
+       u64     lts;            /* last timestamp */
+       u64     variance;       /* variance */
+       u32     avg;            /* mean value */
+       u32     count;          /* number of samples */
+       int     anomalies;      /* number of consecutive anomalies */
+       int     valid;          /* behaviour of the interrupt */
+};
+
+static DEFINE_IDR(irqt_stats);
+
 void irq_timings_enable(void)
 {
        static_branch_enable(&irq_timing_enabled);
@@ -28,3 +45,334 @@ void irq_timings_disable(void)
 {
        static_branch_disable(&irq_timing_enabled);
 }
+
+/**
+ * irqs_update - update the irq timing statistics with a new timestamp
+ *
+ * @irqs: an irqt_stat struct pointer
+ * @ts: the new timestamp
+ *
+ * ** This function must be called with the local irq disabled **
+ *
+ * The statistics are computed online, in other words, the code is
+ * designed to compute the statistics on a stream of values rather
+ * than doing multiple passes on the values to compute the average,
+ * then the variance. The integer division introduces a loss of
+ * precision, but with an acceptable error margin compared to the
+ * results we would get with double floating-point precision: we are
+ * dealing with nanoseconds, so big numbers, consequently the mantissa
+ * is negligible, especially when converting the time to microseconds
+ * afterwards.
+ *
+ * The computation happens at idle time.
+ * When the CPU is not idle, the
+ * interrupts' timestamps are stored in the circular buffer; when the
+ * CPU goes idle and this routine is called, all the buffer's values
+ * are injected into the statistical model, continuing to extend the
+ * statistics from the previous busy-idle cycle.
+ *
+ * The observations showed a device will trigger a burst of periodic
+ * interrupts followed by one or two peaks of longer time, for
+ * instance when an SD card device flushes its cache; then the periodic
+ * intervals occur again. A one-second inactivity period resets the
+ * stats, which gives us the certainty the statistical values won't
+ * exceed 1x10^9, thus the computation won't overflow.
+ *
+ * Basically, the purpose of the algorithm is to watch the periodic
+ * interrupts and eliminate the peaks.
+ *
+ * An interrupt is considered periodically stable if the interval of
+ * its occurrences follows the normal distribution, thus the values
+ * comply with:
+ *
+ *      avg - 3 x stddev < value < avg + 3 x stddev
+ *
+ * Which can be simplified to:
+ *
+ *      -3 x stddev < value - avg < 3 x stddev
+ *
+ *      abs(value - avg) < 3 x stddev
+ *
+ * In order to save a costly square root computation, we use the
+ * variance. For the record, stddev = sqrt(variance). The equation
+ * above becomes:
+ *
+ *      abs(value - avg) < 3 x sqrt(variance)
+ *
+ * And finally we square it:
+ *
+ *      (value - avg) ^ 2 < (3 x sqrt(variance)) ^ 2
+ *
+ *      (value - avg) x (value - avg) < 9 x variance
+ *
+ * Statistically speaking, any value outside this interval is
+ * considered an anomaly and is discarded. However, a normal
+ * distribution appears when the number of samples is 30 (it is the
+ * rule of thumb in statistics, cf. "30 samples" on the Internet). When
+ * there are three consecutive anomalies, the statistics are reset.
+ *
+ */
+static void irqs_update(struct irqt_stat *irqs, u64 ts)
+{
+       u64 old_ts = irqs->lts;
+       u64 variance = 0;
+       u64 interval;
+       s64 diff;
+
+       /*
+        * The timestamps are absolute time values, we need to compute
+        * the timing interval between two interrupts.
+        */
+       irqs->lts = ts;
+
+       /*
+        * The interval type is u64 in order to deal with the same
+        * type in our computation, which prevents tricky issues with
+        * overflow, sign and division.
+        */
+       interval = ts - old_ts;
+
+       /*
+        * If the interrupts triggered more than one second apart, that
+        * ends the sequence considered predictable for our purpose. In
+        * this case, assume we have the beginning of a sequence and the
+        * timestamp is the first value. As it is impossible to
+        * predict anything at this point, return.
+        *
+        * Note the first timestamp of the sequence will always fall
+        * in this test because old_ts is zero. That is what we
+        * want as we need another timestamp to compute an interval.
+        */
+       if (interval >= NSEC_PER_SEC) {
+               memset(irqs, 0, sizeof(*irqs));
+               irqs->lts = ts;
+               return;
+       }
+
+       /*
+        * Pre-compute the delta with the average as the result is
+        * used several times in this function.
+        */
+       diff = interval - irqs->avg;
+
+       /*
+        * Increment the number of samples.
+        */
+       irqs->count++;
+
+       /*
+        * Online variance divided by the number of elements if there
+        * is more than one sample. Normally the formula is division
+        * by count - 1, but we assume the number of elements will be
+        * more than 32, and dividing by 32 instead of 31 is precise
+        * enough.
+        */
+       if (likely(irqs->count > 1))
+               variance = irqs->variance >> IRQ_TIMINGS_SHIFT;
+
+       /*
+        * The rule of thumb in statistics for the normal distribution
+        * is having at least 30 samples in order for the model to
+        * apply. Values outside the interval are considered an
+        * anomaly.
+        */
+       if ((irqs->count >= 30) && ((diff * diff) > (9 * variance))) {
+               /*
+                * After three consecutive anomalies, we reset the
+                * stats as they are no longer stable enough.
+                */
+               if (irqs->anomalies++ >= 3) {
+                       memset(irqs, 0, sizeof(*irqs));
+                       irqs->lts = ts;
+                       return;
+               }
+       } else {
+               /*
+                * The anomalies must be consecutive, so at this
+                * point we reset the anomalies counter.
+                */
+               irqs->anomalies = 0;
+       }
+
+       /*
+        * The interrupt is considered stable enough to try to predict
+        * the next event on it.
+        */
+       irqs->valid = 1;
+
+       /*
+        * Online average algorithm:
+        *
+        *  new_average = average + ((value - average) / count)
+        *
+        * The variance computation depends on the new average
+        * being computed here first.
+        */
+       irqs->avg = irqs->avg + (diff >> IRQ_TIMINGS_SHIFT);
+
+       /*
+        * Online variance algorithm:
+        *
+        *  new_variance = variance + (value - average) x (value - new_average)
+        *
+        * Warning: irqs->avg is updated by the line above, hence
+        * 'interval - irqs->avg' is no longer equal to 'diff'.
+        */
+       irqs->variance = irqs->variance + (diff * (interval - irqs->avg));
+
+       /*
+        * Update the next event.
+        */
+       irqs->ne = ts + irqs->avg;
+}
+
+/**
+ * irq_timings_next_event - Return when the next event is supposed to arrive
+ *
+ * *** This function must be called with the local irq disabled ***
+ *
+ * During the last busy cycle, the number of interrupts is incremented
+ * and stored in the irq_timings structure. This information is
+ * necessary to:
+ *
+ * - know if the index in the table wrapped up:
+ *
+ *      If more interrupts than the array size happened during the
+ *      last busy/idle cycle, the index wrapped up and we have to
+ *      begin with the next element in the array, which is the last one
+ *      in the sequence; otherwise it is at index 0.
+ *
+ * - have an indication of the interrupt activity on this CPU
+ *      (eg. irq/sec)
+ *
+ * The values are 'consumed' after being inserted in the statistical model,
+ * thus the count is reinitialized.
+ *
+ * The array of values **must** be browsed in the time direction: the
+ * timestamp must increase between an element and the next one.
+ *
+ * Returns a nanosecond-based time estimation of the earliest interrupt,
+ * U64_MAX otherwise.
+ */
+u64 irq_timings_next_event(u64 now)
+{
+       struct irq_timings *irqts = this_cpu_ptr(&irq_timings);
+       struct irqt_stat *irqs;
+       struct irqt_stat __percpu *s;
+       u64 ts, ne = U64_MAX;
+       int index, count, i, irq = 0;
+
+       /*
+        * Number of elements in the circular buffer. If it happens it
+        * was flushed before, then the number of elements could be
+        * smaller than IRQ_TIMINGS_SIZE, so the count is used;
+        * otherwise the array size is used, as we wrapped. The index
+        * begins from zero when we did not wrap. That could be done
+        * in a nicer way with a proper circular array structure
+        * type, but at the cost of extra computation in the
+        * interrupt handler hot path. We choose efficiency.
+        */
+       if (irqts->count >= IRQ_TIMINGS_SIZE) {
+               count = IRQ_TIMINGS_SIZE;
+               index = irqts->count & IRQ_TIMINGS_MASK;
+       } else {
+               count = irqts->count;
+               index = 0;
+       }
+
+       /*
+        * Inject the measured irq/timestamp pairs into the statistical
+        * model.
+        */
+       for (i = 0; i < count; i++) {
+
+               ts = irqts->values[(index + i) & IRQ_TIMINGS_MASK];
+
+               irq_timing_decode(ts, &ts, &irq);
+
+               s = idr_find(&irqt_stats, irq);
+               if (s) {
+                       irqs = this_cpu_ptr(s);
+                       irqs_update(irqs, ts);
+               }
+       }
+
+       /*
+        * Reset the counter, we consumed all the data from our
+        * circular buffer.
+        */
+       irqts->count = 0;
+
+       /*
+        * Look, in the list of interrupts' statistics, for the
+        * earliest next event.
+        */
+       idr_for_each_entry(&irqt_stats, s, i) {
+
+               irqs = this_cpu_ptr(s);
+
+               if (!irqs->valid)
+                       continue;
+
+               if (irqs->ne <= now) {
+                       irq = i;
+                       ne = now;
+
+                       /*
+                        * This interrupt mustn't be used in the future
+                        * until new events occur and update the
+                        * statistics.
+                        */
+                       irqs->valid = 0;
+                       break;
+               }
+
+               if (irqs->ne < ne) {
+                       irq = i;
+                       ne = irqs->ne;
+               }
+       }
+
+       return ne;
+}
+
+void irq_timings_free(int irq)
+{
+       struct irqt_stat __percpu *s;
+
+       s = idr_find(&irqt_stats, irq);
+       if (s) {
+               free_percpu(s);
+               idr_remove(&irqt_stats, irq);
+       }
+}
+
+int irq_timings_alloc(int irq)
+{
+       int id;
+       struct irqt_stat __percpu *s;
+
+       /*
+        * Some platforms can have the same private interrupt per cpu,
+        * so this function may be called several times with the
+        * same interrupt number. Just bail out in case the per-cpu
+        * stat structure is already allocated.
+        */
+       s = idr_find(&irqt_stats, irq);
+       if (s)
+               return 0;
+
+       s = alloc_percpu(*s);
+       if (!s)
+               return -ENOMEM;
+
+       idr_preload(GFP_KERNEL);
+       id = idr_alloc(&irqt_stats, s, irq, irq + 1, GFP_NOWAIT);
+       idr_preload_end();
+
+       if (id < 0) {
+               free_percpu(s);
+               return id;
+       }
+
+       return 0;
+}
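The arithmetic in irqs_update() can be exercised outside the kernel. Below is a user-space approximation of the same online average/variance scheme and the squared 3-sigma test (diff^2 > 9 x variance); the struct and function names are illustrative, and kernel-only details (likely(), per-CPU storage, s64/u64 typedefs) are replaced by their stdint equivalents:

```c
#include <stdint.h>
#include <string.h>

#define IRQ_TIMINGS_SHIFT 5
#define NSEC_PER_SEC 1000000000ULL

/* Illustrative stand-in for the kernel's struct irqt_stat. */
struct irqt_stat_sketch {
        uint64_t ne;       /* next event */
        uint64_t lts;      /* last timestamp */
        uint64_t variance; /* online variance (sum of products) */
        uint32_t avg;      /* online mean */
        uint32_t count;    /* number of samples */
        int anomalies;     /* consecutive out-of-band samples */
        int valid;         /* prediction usable */
};

static void irqs_update_sketch(struct irqt_stat_sketch *irqs, uint64_t ts)
{
        uint64_t old_ts = irqs->lts;
        uint64_t variance = 0;
        uint64_t interval;
        int64_t diff;

        irqs->lts = ts;
        interval = ts - old_ts;

        /* A gap of one second or more restarts the sequence. */
        if (interval >= NSEC_PER_SEC) {
                memset(irqs, 0, sizeof(*irqs));
                irqs->lts = ts;
                return;
        }

        diff = (int64_t)interval - irqs->avg;
        irqs->count++;

        /* Divide by 32 (the buffer size) instead of count - 1. */
        if (irqs->count > 1)
                variance = irqs->variance >> IRQ_TIMINGS_SHIFT;

        /* Squared 3-sigma test: (value - avg)^2 > 9 * variance. */
        if (irqs->count >= 30 && (uint64_t)(diff * diff) > 9 * variance) {
                if (irqs->anomalies++ >= 3) {
                        memset(irqs, 0, sizeof(*irqs));
                        irqs->lts = ts;
                        return;
                }
        } else {
                irqs->anomalies = 0;
        }

        irqs->valid = 1;
        /* new_avg = avg + (value - avg) / 32 */
        irqs->avg += (uint32_t)(diff >> IRQ_TIMINGS_SHIFT);
        /* new_var += (value - avg) * (value - new_avg) */
        irqs->variance += (uint64_t)(diff * ((int64_t)interval - irqs->avg));
        irqs->ne = ts + irqs->avg;
}
```

Feeding this sketch a perfectly periodic stream shows the shift-based mean converging toward the period without ever dividing, which is the design choice the V9 changelog describes (div64 replaced by an IRQ_TIMINGS_SHIFT bit shift).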