From patchwork Tue Mar 28 06:35:37 2017
X-Patchwork-Submitter: Dietmar Eggemann
X-Patchwork-Id: 96104
From: Dietmar Eggemann
To: Peter Zijlstra, Ingo Molnar
Cc: LKML, Matt Fleming, Vincent Guittot, Steven Rostedt,
    Morten Rasmussen, Juri Lelli, Patrick Bellasi
Subject: [RFC PATCH 1/5] sched/autogroup: Define autogroup_path() for !CONFIG_SCHED_DEBUG
Date: Tue, 28 Mar 2017 07:35:37 +0100
Message-Id: <20170328063541.12912-2-dietmar.eggemann@arm.com>
In-Reply-To: <20170328063541.12912-1-dietmar.eggemann@arm.com>
References: <20170328063541.12912-1-dietmar.eggemann@arm.com>

Define autogroup_path() even in the !CONFIG_SCHED_DEBUG case. If
CONFIG_SCHED_AUTOGROUP is enabled, the path of an autogroup has to be
available so it can be printed in the load tracking trace events
introduced by this patch set, regardless of whether CONFIG_SCHED_DEBUG
is set or not.
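
For illustration only (not part of the patch), this is roughly how a
caller can resolve a group path with the now unconditionally available
helper; the buffer size and the task_group pointer "tg" are assumptions
of the sketch, and the fallback mirrors what the trace helper added in
the next patch does:

	/* Illustrative sketch: autogroup_path() returns 0 when tg is
	 * not an autogroup, in which case the regular cgroup path is
	 * used instead (tg->css needs CONFIG_FAIR_GROUP_SCHED).
	 */
	char buf[64];

	if (autogroup_path(tg, buf, sizeof(buf)) <= 0)
		cgroup_path(tg->css.cgroup, buf, sizeof(buf));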
Signed-off-by: Dietmar Eggemann
Cc: Peter Zijlstra
Cc: Ingo Molnar
---
 kernel/sched/autogroup.c | 2 --
 kernel/sched/autogroup.h | 2 --
 2 files changed, 4 deletions(-)

--
2.11.0

diff --git a/kernel/sched/autogroup.c b/kernel/sched/autogroup.c
index da39489d2d80..22be4eaf6ae2 100644
--- a/kernel/sched/autogroup.c
+++ b/kernel/sched/autogroup.c
@@ -260,7 +260,6 @@ void proc_sched_autogroup_show_task(struct task_struct *p, struct seq_file *m)
 }
 #endif /* CONFIG_PROC_FS */
 
-#ifdef CONFIG_SCHED_DEBUG
 int autogroup_path(struct task_group *tg, char *buf, int buflen)
 {
 	if (!task_group_is_autogroup(tg))
@@ -268,4 +267,3 @@ int autogroup_path(struct task_group *tg, char *buf, int buflen)
 
 	return snprintf(buf, buflen, "%s-%ld", "/autogroup", tg->autogroup->id);
 }
-#endif /* CONFIG_SCHED_DEBUG */
diff --git a/kernel/sched/autogroup.h b/kernel/sched/autogroup.h
index ce40c810cd5c..6a661cfa9584 100644
--- a/kernel/sched/autogroup.h
+++ b/kernel/sched/autogroup.h
@@ -55,11 +55,9 @@ autogroup_task_group(struct task_struct *p, struct task_group *tg)
 	return tg;
 }
 
-#ifdef CONFIG_SCHED_DEBUG
 static inline int autogroup_path(struct task_group *tg, char *buf, int buflen)
 {
 	return 0;
 }
-#endif
 
 #endif /* CONFIG_SCHED_AUTOGROUP */

From patchwork Tue Mar 28 06:35:38 2017
X-Patchwork-Submitter: Dietmar Eggemann
X-Patchwork-Id: 96107
From: Dietmar Eggemann
To: Peter Zijlstra, Ingo Molnar
Cc: LKML, Matt Fleming, Vincent Guittot, Steven Rostedt,
    Morten Rasmussen, Juri Lelli, Patrick Bellasi
Subject: [RFC PATCH 2/5] sched/events: Introduce cfs_rq load tracking trace event
Date: Tue, 28 Mar 2017 07:35:38 +0100
Message-Id: <20170328063541.12912-3-dietmar.eggemann@arm.com>
In-Reply-To: <20170328063541.12912-1-dietmar.eggemann@arm.com>
References: <20170328063541.12912-1-dietmar.eggemann@arm.com>

The trace event keys load and util (utilization) are mapped to:

 (1) load : cfs_rq->runnable_load_avg
 (2) util : cfs_rq->avg.util_avg

To let this trace event work for configurations w/ and w/o group
scheduling support for cfs (CONFIG_FAIR_GROUP_SCHED) the following
special handling is necessary for non-existent key=value pairs:

 path = "(null)" : in case of !CONFIG_FAIR_GROUP_SCHED
 id   = -1       : in case of !CONFIG_FAIR_GROUP_SCHED

The following list shows examples of the key=value pairs in different
configurations for:

 (1) a root task_group:

     cpu=4 path=/ id=1 load=6 util=331

 (2) a task_group:

     cpu=1 path=/tg1/tg11/tg111 id=4 load=538 util=522

 (3) an autogroup:

     cpu=3 path=/autogroup-18 id=0 load=997 util=517

 (4) w/o CONFIG_FAIR_GROUP_SCHED:

     cpu=0 path=(null) id=-1 load=314 util=289

The trace event is only defined for CONFIG_SMP.

The helper function __trace_sched_path() can be used to get the length
parameter of the dynamic array (path == NULL) and to copy the path into
it (path != NULL).
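
The two-pass protocol behind the dynamic path array, shown here as a
commented excerpt of the TRACE_EVENT below (nothing beyond what the
patch itself adds):

	/* Pass 1 (TP_STRUCT__entry): called with path == NULL, the
	 * helper only returns the length needed to size the
	 * __dynamic_array() in the ring buffer entry.
	 */
	__dynamic_array(char, path, __trace_sched_path(cfs_rq, NULL, 0))

	/* Pass 2 (TP_fast_assign): called again with the reserved
	 * space, the helper copies the autogroup/cgroup path into it.
	 */
	__trace_sched_path(cfs_rq, __get_dynamic_array(path),
			   __get_dynamic_array_len(path));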

Signed-off-by: Dietmar Eggemann
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Steven Rostedt
---
 include/trace/events/sched.h | 73 ++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/fair.c          |  9 ++++++
 2 files changed, 82 insertions(+)

--
2.11.0

Signed-off-by: Peter Zijlstra (Intel)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 9e3ef6c99e4b..51db8a90e45f 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -562,6 +562,79 @@ TRACE_EVENT(sched_wake_idle_without_ipi,
 
 	TP_printk("cpu=%d", __entry->cpu)
 );
+
+#ifdef CONFIG_SMP
+#ifdef CREATE_TRACE_POINTS
+static inline
+int __trace_sched_cpu(struct cfs_rq *cfs_rq)
+{
+#ifdef CONFIG_FAIR_GROUP_SCHED
+	struct rq *rq = cfs_rq->rq;
+#else
+	struct rq *rq = container_of(cfs_rq, struct rq, cfs);
+#endif
+	return cpu_of(rq);
+}
+
+static inline
+int __trace_sched_path(struct cfs_rq *cfs_rq, char *path, int len)
+{
+#ifdef CONFIG_FAIR_GROUP_SCHED
+	int l = path ? len : 0;
+
+	if (task_group_is_autogroup(cfs_rq->tg))
+		return autogroup_path(cfs_rq->tg, path, l) + 1;
+	else
+		return cgroup_path(cfs_rq->tg->css.cgroup, path, l) + 1;
+#else
+	if (path)
+		strcpy(path, "(null)");
+
+	return strlen("(null)");
+#endif
+}
+
+static inline int __trace_sched_id(struct cfs_rq *cfs_rq)
+{
+#ifdef CONFIG_FAIR_GROUP_SCHED
+	return cfs_rq->tg->css.id;
+#else
+	return -1;
+#endif
+}
+#endif /* CREATE_TRACE_POINTS */
+
+/*
+ * Tracepoint for cfs_rq load tracking:
+ */
+TRACE_EVENT(sched_load_cfs_rq,
+
+	TP_PROTO(struct cfs_rq *cfs_rq),
+
+	TP_ARGS(cfs_rq),
+
+	TP_STRUCT__entry(
+		__field(	int,		cpu			)
+		__dynamic_array(char,		path,
+				__trace_sched_path(cfs_rq, NULL, 0)	)
+		__field(	int,		id			)
+		__field(	unsigned long,	load			)
+		__field(	unsigned long,	util			)
+	),
+
+	TP_fast_assign(
+		__entry->cpu	= __trace_sched_cpu(cfs_rq);
+		__trace_sched_path(cfs_rq, __get_dynamic_array(path),
+				   __get_dynamic_array_len(path));
+		__entry->id	= __trace_sched_id(cfs_rq);
+		__entry->load	= cfs_rq->runnable_load_avg;
+		__entry->util	= cfs_rq->avg.util_avg;
+	),
+
+	TP_printk("cpu=%d path=%s id=%d load=%lu util=%lu", __entry->cpu,
+		  __get_str(path), __entry->id, __entry->load, __entry->util)
+);
+#endif /* CONFIG_SMP */
 #endif /* _TRACE_SCHED_H */
 
 /* This part must be outside protection */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 03adf9fb48b1..ac19ab6ced8f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2950,6 +2950,9 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 		sa->util_avg = sa->util_sum / LOAD_AVG_MAX;
 	}
 
+	if (cfs_rq)
+		trace_sched_load_cfs_rq(cfs_rq);
+
 	return decayed;
 }
 
@@ -3170,6 +3173,8 @@ static inline int propagate_entity_load_avg(struct sched_entity *se)
 	update_tg_cfs_util(cfs_rq, se);
 	update_tg_cfs_load(cfs_rq, se);
 
+	trace_sched_load_cfs_rq(cfs_rq);
+
 	return 1;
 }
 
@@ -3359,6 +3364,8 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 	set_tg_cfs_propagate(cfs_rq);
 
 	cfs_rq_util_change(cfs_rq);
+
+	trace_sched_load_cfs_rq(cfs_rq);
 }
 
 /**
@@ -3379,6 +3386,8 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 	set_tg_cfs_propagate(cfs_rq);
 
 	cfs_rq_util_change(cfs_rq);
+
+	trace_sched_load_cfs_rq(cfs_rq);
 }
 
 /* Add the load generated by se into cfs_rq's load average */

From patchwork Tue Mar 28 06:35:39 2017
X-Patchwork-Submitter: Dietmar Eggemann
X-Patchwork-Id: 96105
From: Dietmar Eggemann
To: Peter Zijlstra, Ingo Molnar
Cc: LKML, Matt Fleming, Vincent Guittot, Steven Rostedt,
    Morten Rasmussen, Juri Lelli, Patrick Bellasi
Subject: [RFC PATCH 3/5] sched/fair: Export group_cfs_rq()
Date: Tue, 28 Mar 2017 07:35:39 +0100
Message-Id: <20170328063541.12912-4-dietmar.eggemann@arm.com>
In-Reply-To: <20170328063541.12912-1-dietmar.eggemann@arm.com>
References: <20170328063541.12912-1-dietmar.eggemann@arm.com>

Export struct cfs_rq *group_cfs_rq(struct sched_entity *se) so that
sched_entities representing tasks can be distinguished from those
representing task_groups in the sched_entity related load tracking
trace event introduced by the next patch.
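
A minimal sketch of the pattern this export enables (it is the test
used by the trace event in the next patch; the local variable "p" is
only for illustration):

	/* group_cfs_rq(se) returns the cfs_rq owned by a group
	 * sched_entity and NULL for a task sched_entity (always NULL
	 * w/o CONFIG_FAIR_GROUP_SCHED).
	 */
	struct task_struct *p = group_cfs_rq(se) ? NULL
				: container_of(se, struct task_struct, se);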

Signed-off-by: Dietmar Eggemann
Cc: Peter Zijlstra
Cc: Ingo Molnar
---
 include/linux/sched.h | 10 ++++++++++
 kernel/sched/fair.c   | 12 ------------
 2 files changed, 10 insertions(+), 12 deletions(-)

--
2.11.0

diff --git a/include/linux/sched.h b/include/linux/sched.h
index d67eee84fd43..8a35ff99140b 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -392,6 +392,16 @@ struct sched_entity {
 #endif
 };
 
+/* cfs_rq "owned" by this sched_entity */
+static inline struct cfs_rq *group_cfs_rq(struct sched_entity *se)
+{
+#ifdef CONFIG_FAIR_GROUP_SCHED
+	return se->my_q;
+#else
+	return NULL;
+#endif
+}
+
 struct sched_rt_entity {
 	struct list_head run_list;
 	unsigned long timeout;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ac19ab6ced8f..04d4f81b96ae 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -292,12 +292,6 @@ static inline struct cfs_rq *cfs_rq_of(struct sched_entity *se)
 	return se->cfs_rq;
 }
 
-/* runqueue "owned" by this group */
-static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
-{
-	return grp->my_q;
-}
-
 static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	if (!cfs_rq->on_list) {
@@ -449,12 +443,6 @@ static inline struct cfs_rq *cfs_rq_of(struct sched_entity *se)
 	return &rq->cfs;
 }
 
-/* runqueue "owned" by this group */
-static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
-{
-	return NULL;
-}
-
 static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 {
 }

From patchwork Tue Mar 28 06:35:40 2017
X-Patchwork-Submitter: Dietmar Eggemann
X-Patchwork-Id: 96106
From: Dietmar Eggemann
To: Peter Zijlstra, Ingo Molnar
Cc: LKML, Matt Fleming, Vincent Guittot, Steven Rostedt,
    Morten Rasmussen, Juri Lelli, Patrick Bellasi
Subject: [RFC PATCH 4/5] sched/events: Introduce sched_entity load tracking trace event
Date: Tue, 28 Mar 2017 07:35:40 +0100
Message-Id: <20170328063541.12912-5-dietmar.eggemann@arm.com>
In-Reply-To: <20170328063541.12912-1-dietmar.eggemann@arm.com>
References: <20170328063541.12912-1-dietmar.eggemann@arm.com>

The trace event keys load and util (utilization) are mapped to:

 (1) load : se->avg.load_avg
 (2) util : se->avg.util_avg

To let this trace event work for configurations w/ and w/o group
scheduling support for cfs (CONFIG_FAIR_GROUP_SCHED) the following
special handling is necessary for non-existent key=value pairs:

 path = "(null)" : in case of !CONFIG_FAIR_GROUP_SCHED or if the
                   sched_entity represents a task
 id   = -1       : in case of !CONFIG_FAIR_GROUP_SCHED or if the
                   sched_entity represents a task
 comm = "(null)" : in case the sched_entity represents a task_group
 pid  = -1       : in case the sched_entity represents a task_group

The following list shows examples of the key=value pairs in different
configurations for:

 (1) a task:

     cpu=0 path=(null) id=-1 comm=sshd pid=2206 load=102 util=102

 (2) a task_group:

     cpu=1 path=/tg1/tg11/tg111 id=4 comm=(null) pid=-1 load=882 util=510

 (3) an autogroup:

     cpu=0 path=/autogroup-13 id=0 comm=(null) pid=-1 load=49 util=48

 (4) w/o CONFIG_FAIR_GROUP_SCHED:

     cpu=0 path=(null) id=-1 comm=sshd pid=2211 load=301 util=265

The trace event is only defined for CONFIG_SMP.

The helper functions __trace_sched_cpu(), __trace_sched_path() and
__trace_sched_id() are extended to deal with sched_entities as well.
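
The cpu resolution performed by the extended helper, as a sketch taken
from the diff below (shown standalone only for illustration):

	/* A cfs_rq (i.e. a group sched_entity or a cfs_rq event)
	 * carries its rq, so the cpu is taken from there; a task
	 * sched_entity owns no cfs_rq, so the task's cpu is used.
	 */
	return rq ? cpu_of(rq)
		  : task_cpu(container_of(se, struct task_struct, se));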

Signed-off-by: Dietmar Eggemann
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Steven Rostedt
---
 include/trace/events/sched.h | 63 +++++++++++++++++++++++++++++++++++---------
 kernel/sched/fair.c          |  3 +++
 2 files changed, 54 insertions(+), 12 deletions(-)

--
2.11.0

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 51db8a90e45f..647cfaf528fd 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -566,14 +566,15 @@ TRACE_EVENT(sched_wake_idle_without_ipi,
 #ifdef CONFIG_SMP
 #ifdef CREATE_TRACE_POINTS
 static inline
-int __trace_sched_cpu(struct cfs_rq *cfs_rq)
+int __trace_sched_cpu(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	struct rq *rq = cfs_rq->rq;
+	struct rq *rq = cfs_rq ? cfs_rq->rq : NULL;
 #else
-	struct rq *rq = container_of(cfs_rq, struct rq, cfs);
+	struct rq *rq = cfs_rq ? container_of(cfs_rq, struct rq, cfs) : NULL;
 #endif
-	return cpu_of(rq);
+	return rq ? cpu_of(rq)
+		  : task_cpu((container_of(se, struct task_struct, se)));
 }
 
@@ -582,25 +583,24 @@ int __trace_sched_path(struct cfs_rq *cfs_rq, char *path, int len)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	int l = path ? len : 0;
 
-	if (task_group_is_autogroup(cfs_rq->tg))
+	if (cfs_rq && task_group_is_autogroup(cfs_rq->tg))
 		return autogroup_path(cfs_rq->tg, path, l) + 1;
-	else
+	else if (cfs_rq && cfs_rq->tg->css.cgroup)
 		return cgroup_path(cfs_rq->tg->css.cgroup, path, l) + 1;
-#else
+#endif
 	if (path)
 		strcpy(path, "(null)");
 
 	return strlen("(null)");
-#endif
 }
 
 static inline int __trace_sched_id(struct cfs_rq *cfs_rq)
 {
 #ifdef CONFIG_FAIR_GROUP_SCHED
-	return cfs_rq->tg->css.id;
-#else
-	return -1;
+	if (cfs_rq)
+		return cfs_rq->tg->css.id;
 #endif
+	return -1;
 }
 #endif /* CREATE_TRACE_POINTS */
 
@@ -623,7 +623,7 @@ TRACE_EVENT(sched_load_cfs_rq,
 	),
 
 	TP_fast_assign(
-		__entry->cpu	= __trace_sched_cpu(cfs_rq);
+		__entry->cpu	= __trace_sched_cpu(cfs_rq, NULL);
 		__trace_sched_path(cfs_rq, __get_dynamic_array(path),
 				   __get_dynamic_array_len(path));
 		__entry->id	= __trace_sched_id(cfs_rq);
@@ -634,6 +634,45 @@ TRACE_EVENT(sched_load_cfs_rq,
 	TP_printk("cpu=%d path=%s id=%d load=%lu util=%lu", __entry->cpu,
 		  __get_str(path), __entry->id, __entry->load, __entry->util)
 );
+
+/*
+ * Tracepoint for sched_entity load tracking:
+ */
+TRACE_EVENT(sched_load_se,
+
+	TP_PROTO(struct sched_entity *se),
+
+	TP_ARGS(se),
+
+	TP_STRUCT__entry(
+		__field(	int,		cpu			)
+		__dynamic_array(char,		path,
+			__trace_sched_path(group_cfs_rq(se), NULL, 0)	)
+		__field(	int,		id			)
+		__array(	char,		comm,	TASK_COMM_LEN	)
+		__field(	pid_t,		pid			)
+		__field(	unsigned long,	load			)
+		__field(	unsigned long,	util			)
+	),
+
+	TP_fast_assign(
+		struct task_struct *p = group_cfs_rq(se) ? NULL
+				: container_of(se, struct task_struct, se);
+
+		__entry->cpu	= __trace_sched_cpu(group_cfs_rq(se), se);
+		__trace_sched_path(group_cfs_rq(se), __get_dynamic_array(path),
+				   __get_dynamic_array_len(path));
+		__entry->id	= __trace_sched_id(group_cfs_rq(se));
+		memcpy(__entry->comm, p ? p->comm : "(null)", TASK_COMM_LEN);
+		__entry->pid	= p ? p->pid : -1;
+		__entry->load	= se->avg.load_avg;
+		__entry->util	= se->avg.util_avg;
+	),
+
+	TP_printk("cpu=%d path=%s id=%d comm=%s pid=%d load=%lu util=%lu",
+		  __entry->cpu, __get_str(path), __entry->id, __entry->comm,
+		  __entry->pid, __entry->load, __entry->util)
+);
 #endif /* CONFIG_SMP */
 #endif /* _TRACE_SCHED_H */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 04d4f81b96ae..d1dcb19f5b55 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2940,6 +2940,8 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 
 	if (cfs_rq)
 		trace_sched_load_cfs_rq(cfs_rq);
+	else
+		trace_sched_load_se(container_of(sa, struct sched_entity, avg));
 
 	return decayed;
 }
@@ -3162,6 +3164,7 @@ static inline int propagate_entity_load_avg(struct sched_entity *se)
 	update_tg_cfs_load(cfs_rq, se);
 
 	trace_sched_load_cfs_rq(cfs_rq);
+	trace_sched_load_se(se);
 
 	return 1;
 }

From patchwork Tue Mar 28 06:35:41 2017
X-Patchwork-Submitter: Dietmar Eggemann
X-Patchwork-Id: 96108
From: Dietmar Eggemann
To: Peter Zijlstra, Ingo Molnar
Cc: LKML, Matt Fleming, Vincent Guittot, Steven Rostedt,
    Morten Rasmussen, Juri Lelli, Patrick Bellasi
Subject: [RFC PATCH 5/5] sched/events: Introduce task_group load tracking trace event
Date: Tue, 28 Mar 2017 07:35:41 +0100
Message-Id: <20170328063541.12912-6-dietmar.eggemann@arm.com>
In-Reply-To: <20170328063541.12912-1-dietmar.eggemann@arm.com>
References: <20170328063541.12912-1-dietmar.eggemann@arm.com>

The trace event key load is mapped to:

 (1) load : cfs_rq->tg->load_avg

The cfs_rq owned by the task_group is used as the only parameter for
the trace event because it has a reference to the task_group and the
cpu. Using the task_group as a parameter instead would require the cpu
as a second parameter.
A task_group is global and not per-cpu data. The cpu key only
indicates on which cpu the value was gathered.

The following list shows examples of the key=value pairs for:

 (1) a task_group:

     cpu=1 path=/tg1/tg11/tg111 id=4 load=517

 (2) an autogroup:

     cpu=1 path=/autogroup-10 id=0 load=1050

We don't maintain a load signal for a root task_group.

The trace event is only defined if cfs group scheduling support
(CONFIG_FAIR_GROUP_SCHED) is enabled.

Signed-off-by: Dietmar Eggemann
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Steven Rostedt
---
 include/trace/events/sched.h | 31 +++++++++++++++++++++++++++++++
 kernel/sched/fair.c          |  2 ++
 2 files changed, 33 insertions(+)

--
2.11.0

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 647cfaf528fd..3fe0092176f8 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -673,6 +673,37 @@ TRACE_EVENT(sched_load_se,
 		  __entry->cpu, __get_str(path), __entry->id, __entry->comm,
 		  __entry->pid, __entry->load, __entry->util)
 );
+
+/*
+ * Tracepoint for task_group load tracking:
+ */
+#ifdef CONFIG_FAIR_GROUP_SCHED
+TRACE_EVENT(sched_load_tg,
+
+	TP_PROTO(struct cfs_rq *cfs_rq),
+
+	TP_ARGS(cfs_rq),
+
+	TP_STRUCT__entry(
+		__field(	int,	cpu			)
+		__dynamic_array(char,	path,
+				__trace_sched_path(cfs_rq, NULL, 0)	)
+		__field(	int,	id			)
+		__field(	long,	load			)
+	),
+
+	TP_fast_assign(
+		__entry->cpu	= cfs_rq->rq->cpu;
+		__trace_sched_path(cfs_rq, __get_dynamic_array(path),
+				   __get_dynamic_array_len(path));
+		__entry->id	= cfs_rq->tg->css.id;
+		__entry->load	= atomic_long_read(&cfs_rq->tg->load_avg);
+	),
+
+	TP_printk("cpu=%d path=%s id=%d load=%ld", __entry->cpu,
+		  __get_str(path), __entry->id, __entry->load)
+);
+#endif /* CONFIG_FAIR_GROUP_SCHED */
 #endif /* CONFIG_SMP */
 #endif /* _TRACE_SCHED_H */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d1dcb19f5b55..dbe2d5ef8b9e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2997,6 +2997,8 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
 		atomic_long_add(delta, &cfs_rq->tg->load_avg);
 		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
 	}
+
+	trace_sched_load_tg(cfs_rq);
 }
 
 /*
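
For reference, a sketch of the surrounding context of the hunk above,
reconstructed from the mainline update_tg_load_avg() of that time (an
assumption of this illustration, not part of the diff); it shows why
the traced value is a global per-task_group sum even though the event
fires per cpu:

	/* tg->load_avg is only updated when this cpu's cfs_rq
	 * contribution changed by more than 1/64 of its previous
	 * value, so the event reports the global sum as seen from
	 * the cpu doing the update.
	 */
	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;

	if (force || abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
		atomic_long_add(delta, &cfs_rq->tg->load_avg);
		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
	}

	trace_sched_load_tg(cfs_rq);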