From patchwork Wed Apr 13 11:05:56 2016
X-Patchwork-Submitter: Daniel Lezcano
X-Patchwork-Id: 65711
From: Daniel Lezcano <daniel.lezcano@linaro.org>
To: daniel.lezcano@linaro.org, tglx@linutronix.de
Cc: nicolas.pitre@linaro.org, shreyas@linux.vnet.ibm.com,
	linux-kernel@vger.kernel.org, peterz@infradead.org,
	rafael@kernel.org, vincent.guittot@linaro.org
Subject: [PATCH V4] irq: Track the interrupt timings
Date: Wed, 13 Apr 2016 13:05:56 +0200
Message-Id: <1460545556-15085-1-git-send-email-daniel.lezcano@linaro.org>

The interrupt framework gives a lot of information about each interrupt,
but it does not keep track of when those interrupts occur.

This patch provides a means to record the elapsed time between successive
interrupt occurrences in a per-IRQ, per-CPU circular buffer, to help with
the prediction of the next occurrence using a statistical model.

A new function is added to browse the different interrupts and retrieve
the timing information stored for each of them.
A static key is introduced so that when irq prediction is switched off at
runtime, the overhead is reduced to near zero.

The irq timings are expected to be used by several sub-systems; for this
reason the static key is a reference counter, so that when the last user
releases the irq timings, the measurement is effectively deactivated.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
---
V4:
 - Added a static key
 - Added more comments for irq_timings_get_next()
 - Unified some function names to be prefixed by 'irq_timings_...'
 - Fixed a rebase error

V3:
 - Replaced ktime_get() by local_clock()
 - Shared irq are not handled
 - Simplified code by adding the timing in the irqdesc struct
 - Added a function to browse the irq timings

V2:
 - Fixed kerneldoc comment
 - Removed data field from the struct irq timing
 - Changed the lock section comment
 - Removed semi-colon style with empty stub
 - Replaced macro by static inline
 - Fixed static functions declaration

RFC:
 - initial posting
---
 include/linux/interrupt.h |  18 +++++++
 include/linux/irqdesc.h   |   4 ++
 kernel/irq/Kconfig        |   3 ++
 kernel/irq/Makefile       |   1 +
 kernel/irq/handle.c       |   2 +
 kernel/irq/internals.h    |  59 ++++++++++++++++++++
 kernel/irq/irqdesc.c      |   6 +++
 kernel/irq/manage.c       |   3 ++
 kernel/irq/timings.c      | 133 ++++++++++++++++++++++++++++++++++++++++++++++
 9 files changed, 229 insertions(+)
 create mode 100644 kernel/irq/timings.c

-- 
1.9.1

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 0e95fcc..11a8d20 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -665,6 +665,24 @@ static inline void init_irq_proc(void)
 }
 #endif
 
+#ifdef CONFIG_IRQ_TIMINGS
+
+#define IRQ_TIMINGS_SHIFT	3
+#define IRQ_TIMINGS_SIZE	(1 << IRQ_TIMINGS_SHIFT)
+#define IRQ_TIMINGS_MASK	(IRQ_TIMINGS_SIZE - 1)
+
+struct irq_timings {
+	u32	values[IRQ_TIMINGS_SIZE]; /* our circular buffer */
+	u64	sum;		/* sum of values */
+	u64	timestamp;	/* latest timestamp */
+	unsigned int	w_index; /* current buffer index */
+};
+
+struct irq_timings *irq_timings_get_next(int *irq);
+void irq_timings_get(void);
+void irq_timings_put(void);
+#endif
+
 struct seq_file;
 int show_interrupts(struct seq_file *p, void *v);
 int arch_show_interrupts(struct seq_file *p, int prec);
diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index dcca77c..f4e29b2 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -12,6 +12,7 @@ struct proc_dir_entry;
 struct module;
 struct irq_desc;
 struct irq_domain;
+struct irq_timings;
 struct pt_regs;
 
 /**
@@ -51,6 +52,9 @@ struct irq_desc {
 	struct irq_data		irq_data;
 	unsigned int __percpu	*kstat_irqs;
 	irq_flow_handler_t	handle_irq;
+#ifdef CONFIG_IRQ_TIMINGS
+	struct irq_timings __percpu *timings;
+#endif
 #ifdef CONFIG_IRQ_PREFLOW_FASTEOI
 	irq_preflow_handler_t	preflow_handler;
 #endif
diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
index 3bbfd6a..38e551d 100644
--- a/kernel/irq/Kconfig
+++ b/kernel/irq/Kconfig
@@ -81,6 +81,9 @@ config GENERIC_MSI_IRQ_DOMAIN
 config HANDLE_DOMAIN_IRQ
 	bool
 
+config IRQ_TIMINGS
+	bool
+
 config IRQ_DOMAIN_DEBUG
 	bool "Expose hardware/virtual IRQ mapping via debugfs"
 	depends on IRQ_DOMAIN && DEBUG_FS
diff --git a/kernel/irq/Makefile b/kernel/irq/Makefile
index 2ee42e9..e1debaa9 100644
--- a/kernel/irq/Makefile
+++ b/kernel/irq/Makefile
@@ -9,3 +9,4 @@ obj-$(CONFIG_GENERIC_IRQ_MIGRATION) += cpuhotplug.o
 obj-$(CONFIG_PM_SLEEP) += pm.o
 obj-$(CONFIG_GENERIC_MSI_IRQ) += msi.o
 obj-$(CONFIG_GENERIC_IRQ_IPI) += ipi.o
+obj-$(CONFIG_IRQ_TIMINGS) += timings.o
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index a15b548..cd37536 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -138,6 +138,8 @@ irqreturn_t handle_irq_event_percpu(struct irq_desc *desc)
 	unsigned int flags = 0, irq = desc->irq_data.irq;
 	struct irqaction *action;
 
+	handle_timings(desc);
+
 	for_each_action_of_desc(desc, action) {
 		irqreturn_t res;
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index eab521fc..781ecbf 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -56,6 +56,7 @@ enum {
 	IRQS_WAITING		= 0x00000080,
 	IRQS_PENDING		= 0x00000200,
 	IRQS_SUSPENDED		= 0x00000800,
+	IRQS_TIMINGS		= 0x00001000,
 };
 
 #include "debug.h"
@@ -218,3 +219,61 @@ irq_pm_install_action(struct irq_desc *desc, struct irqaction *action) { }
 static inline void
 irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action) { }
 #endif
+
+#ifdef CONFIG_IRQ_TIMINGS
+static inline int alloc_timings(struct irq_desc *desc)
+{
+	desc->timings = alloc_percpu(struct irq_timings);
+	if (!desc->timings)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static inline void free_timings(struct irq_desc *desc)
+{
+	free_percpu(desc->timings);
+}
+
+static inline void remove_timings(struct irq_desc *desc)
+{
+	desc->istate &= ~IRQS_TIMINGS;
+}
+
+static inline void setup_timings(struct irq_desc *desc, struct irqaction *act)
+{
+	/*
+	 * Timers are deterministic, so no need to do any measurement
+	 * on them.
+	 */
+	if (act->flags & __IRQF_TIMER)
+		return;
+
+	desc->istate |= IRQS_TIMINGS;
+}
+
+extern struct static_key_false irq_timing_enabled;
+
+extern void __handle_timings(struct irq_desc *desc);
+
+/*
+ * The function handle_timings is only called in one place in the
+ * interrupts handler. We want this function always inline so the
+ * code inside is embedded in the function and the static key branching
+ * code can act at the higher level. Without the explicit __always_inline
+ * we can end up with a call to the 'handle_timings' function with a small
+ * overhead in the hotpath for nothing.
+ */
+static __always_inline void handle_timings(struct irq_desc *desc)
+{
+	if (static_key_enabled(&irq_timing_enabled))
+		__handle_timings(desc);
+}
+#else
+static inline int alloc_timings(struct irq_desc *desc) { return 0; }
+static inline void free_timings(struct irq_desc *desc) {}
+static inline void handle_timings(struct irq_desc *desc) {}
+static inline void remove_timings(struct irq_desc *desc) {}
+static inline void setup_timings(struct irq_desc *desc,
+				 struct irqaction *act) {};
+#endif
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 0ccd028..bd74bc7 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -174,6 +174,9 @@ static struct irq_desc *alloc_desc(int irq, int node, struct module *owner)
 	if (alloc_masks(desc, gfp, node))
 		goto err_kstat;
 
+	if (alloc_timings(desc))
+		goto err_mask;
+
 	raw_spin_lock_init(&desc->lock);
 	lockdep_set_class(&desc->lock, &irq_desc_lock_class);
 	init_rcu_head(&desc->rcu);
@@ -182,6 +185,8 @@ static struct irq_desc *alloc_desc(int irq, int node, struct module *owner)
 
 	return desc;
 
+err_mask:
+	free_masks(desc);
 err_kstat:
 	free_percpu(desc->kstat_irqs);
 err_desc:
@@ -193,6 +198,7 @@ static void delayed_free_desc(struct rcu_head *rhp)
 {
 	struct irq_desc *desc = container_of(rhp, struct irq_desc, rcu);
 
+	free_timings(desc);
 	free_masks(desc);
 	free_percpu(desc->kstat_irqs);
 	kfree(desc);
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 3ddd229..132c2d7 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1343,6 +1343,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 		__enable_irq(desc);
 	}
 
+	setup_timings(desc, new);
+
 	raw_spin_unlock_irqrestore(&desc->lock, flags);
 
 	/*
@@ -1465,6 +1467,7 @@ static struct irqaction *__free_irq(unsigned int irq, void *dev_id)
 		irq_settings_clr_disable_unlazy(desc);
 		irq_shutdown(desc);
 		irq_release_resources(desc);
+		remove_timings(desc);
 	}
 
 #ifdef CONFIG_SMP
diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
new file mode 100644
index 0000000..cb91a75
--- /dev/null
+++ b/kernel/irq/timings.c
@@ -0,0 +1,133 @@
+/*
+ * linux/kernel/irq/timings.c
+ *
+ * Copyright (C) 2016, Linaro Ltd - Daniel Lezcano
+ *
+ */
+#include
+#include
+#include
+#include
+#include
+
+#include "internals.h"
+
+DEFINE_STATIC_KEY_FALSE(irq_timing_enabled);
+
+void irq_timings_get(void)
+{
+	static_branch_inc(&irq_timing_enabled);
+}
+
+void irq_timings_put(void)
+{
+	static_branch_dec(&irq_timing_enabled);
+}
+
+/**
+ * __handle_timings - stores an irq timing when an interrupt occurs
+ *
+ * @desc: the irq descriptor
+ *
+ * For all interrupts with their IRQS_TIMINGS flag set, the function
+ * computes the time interval between two interrupt events and stores it
+ * in a circular buffer.
+ */
+void __handle_timings(struct irq_desc *desc)
+{
+	struct irq_timings *timings;
+	u64 prev, now, diff;
+
+	if (!(desc->istate & IRQS_TIMINGS))
+		return;
+
+	timings = this_cpu_ptr(desc->timings);
+	now = local_clock();
+	prev = timings->timestamp;
+	timings->timestamp = now;
+
+	/*
+	 * If it is the first interrupt of the series, we can't
+	 * compute an interval, just store the timestamp and exit.
+	 */
+	if (unlikely(!prev))
+		return;
+
+	diff = now - prev;
+
+	/*
+	 * microsec (actually 1024th of a millisec) precision is good
+	 * enough for our purpose.
+	 */
+	diff >>= 10;
+
+	/*
+	 * There is no point to store intervals from interrupts more
+	 * than ~1 second apart. Furthermore that increases the risk
+	 * of overflowing our variance computation. Reset all values
+	 * in that case. Otherwise we know the magnitude of diff is
+	 * well within 32 bits.
+	 */
+	if (unlikely(diff > USEC_PER_SEC)) {
+		memset(timings, 0, sizeof(*timings));
+		timings->timestamp = now;
+		return;
+	}
+
+	/* The oldest value corresponds to the next index. */
+	timings->w_index = (timings->w_index + 1) & IRQ_TIMINGS_MASK;
+
+	/*
+	 * Remove the oldest value from the summing. If this is the
+	 * first time we go through this array slot, the previous
+	 * value will be zero and we won't subtract anything from the
+	 * current sum. Hence this code relies on a zero-ed structure.
+	 */
+	timings->sum -= timings->values[timings->w_index];
+	timings->values[timings->w_index] = diff;
+	timings->sum += diff;
+}
+
+/**
+ * irq_timings_get_next - return the next irq timing
+ *
+ * @irq: a pointer to an integer representing the interrupt number
+ *
+ * This function allows to safely browse the interrupt descriptors in order
+ * to retrieve the interrupts timings. The parameter gives the interrupt
+ * number to begin with and will return the interrupt timings for the next
+ * allocated irq. This approach gives us the possibility to go through the
+ * different interrupts without having to handle the sparse irq.
+ *
+ * The function changes @irq to the next allocated irq + 1; it should be
+ * passed back again and again until NULL is returned. Usually this function
+ * is called the first time with @irq = 0.
+ *
+ * Returns a struct irq_timings, NULL if we reach the end of the interrupts
+ * list.
+ */
+struct irq_timings *irq_timings_get_next(int *irq)
+{
+	struct irq_desc *desc;
+	int next;
+
+again:
+	/* Do a racy lookup of the next allocated irq */
+	next = irq_get_next_irq(*irq);
+	if (next >= nr_irqs)
+		return NULL;
+
+	*irq = next + 1;
+
+	/*
+	 * Now lookup the descriptor. It's RCU protected. This
+	 * descriptor might belong to an uninteresting interrupt or
+	 * one that is not measured. Look for the next interrupt in
+	 * that case.
+	 */
+	desc = irq_to_desc(next);
+	if (!desc || !(desc->istate & IRQS_TIMINGS))
+		goto again;
+
+	return this_cpu_ptr(desc->timings);
+}