From patchwork Fri May 25 13:12:22 2018
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 136867
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org, rjw@rjwysocki.net
Cc: juri.lelli@redhat.com, dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com, viresh.kumar@linaro.org, valentin.schneider@arm.com, quentin.perret@arm.com, Vincent Guittot
Subject: [PATCH v5 01/10] sched/pelt: Move pelt related code in a dedicated file
Date: Fri, 25 May 2018 15:12:22 +0200
Message-Id: <1527253951-22709-2-git-send-email-vincent.guittot@linaro.org>
In-Reply-To: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

We want to track rt_rq's utilization as a part of the estimation of the whole rq's utilization. This is necessary because rt tasks can steal utilization from cfs tasks and make them lighter than they are. As we want to use the same load tracking mechanism for both and prevent useless dependencies between cfs and rt code, the PELT code is moved into a dedicated file.
Signed-off-by: Vincent Guittot --- kernel/sched/Makefile | 2 +- kernel/sched/fair.c | 333 +------------------------------------------------- kernel/sched/pelt.c | 311 ++++++++++++++++++++++++++++++++++++++++++++++ kernel/sched/pelt.h | 43 +++++++ kernel/sched/sched.h | 19 +++ 5 files changed, 375 insertions(+), 333 deletions(-) create mode 100644 kernel/sched/pelt.c create mode 100644 kernel/sched/pelt.h -- 2.7.4 diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile index d9a02b3..7fe1834 100644 --- a/kernel/sched/Makefile +++ b/kernel/sched/Makefile @@ -20,7 +20,7 @@ obj-y += core.o loadavg.o clock.o cputime.o obj-y += idle.o fair.o rt.o deadline.o obj-y += wait.o wait_bit.o swait.o completion.o -obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o topology.o stop_task.o +obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o topology.o stop_task.o pelt.o obj-$(CONFIG_SCHED_AUTOGROUP) += autogroup.o obj-$(CONFIG_SCHEDSTATS) += stats.o obj-$(CONFIG_SCHED_DEBUG) += debug.o diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index e497c05..6390c66 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -255,9 +255,6 @@ static inline struct rq *rq_of(struct cfs_rq *cfs_rq) return cfs_rq->rq; } -/* An entity is a task if it doesn't "own" a runqueue */ -#define entity_is_task(se) (!se->my_q) - static inline struct task_struct *task_of(struct sched_entity *se) { SCHED_WARN_ON(!entity_is_task(se)); @@ -419,7 +416,6 @@ static inline struct rq *rq_of(struct cfs_rq *cfs_rq) return container_of(cfs_rq, struct rq, cfs); } -#define entity_is_task(se) 1 #define for_each_sched_entity(se) \ for (; se; se = NULL) @@ -692,7 +688,7 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se) } #ifdef CONFIG_SMP - +#include "pelt.h" #include "sched-pelt.h" static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu); @@ -2749,19 +2745,6 @@ account_entity_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se) } while (0) #ifdef CONFIG_SMP -/* - * XXX we want to 
get rid of these helpers and use the full load resolution. - */ -static inline long se_weight(struct sched_entity *se) -{ - return scale_load_down(se->load.weight); -} - -static inline long se_runnable(struct sched_entity *se) -{ - return scale_load_down(se->runnable_weight); -} - static inline void enqueue_runnable_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) { @@ -3062,314 +3045,6 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq, int flags) } #ifdef CONFIG_SMP -/* - * Approximate: - * val * y^n, where y^32 ~= 0.5 (~1 scheduling period) - */ -static u64 decay_load(u64 val, u64 n) -{ - unsigned int local_n; - - if (unlikely(n > LOAD_AVG_PERIOD * 63)) - return 0; - - /* after bounds checking we can collapse to 32-bit */ - local_n = n; - - /* - * As y^PERIOD = 1/2, we can combine - * y^n = 1/2^(n/PERIOD) * y^(n%PERIOD) - * With a look-up table which covers y^n (n= LOAD_AVG_PERIOD)) { - val >>= local_n / LOAD_AVG_PERIOD; - local_n %= LOAD_AVG_PERIOD; - } - - val = mul_u64_u32_shr(val, runnable_avg_yN_inv[local_n], 32); - return val; -} - -static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3) -{ - u32 c1, c2, c3 = d3; /* y^0 == 1 */ - - /* - * c1 = d1 y^p - */ - c1 = decay_load((u64)d1, periods); - - /* - * p-1 - * c2 = 1024 \Sum y^n - * n=1 - * - * inf inf - * = 1024 ( \Sum y^n - \Sum y^n - y^0 ) - * n=0 n=p - */ - c2 = LOAD_AVG_MAX - decay_load(LOAD_AVG_MAX, periods) - 1024; - - return c1 + c2 + c3; -} - -/* - * Accumulate the three separate parts of the sum; d1 the remainder - * of the last (incomplete) period, d2 the span of full periods and d3 - * the remainder of the (incomplete) current period. - * - * d1 d2 d3 - * ^ ^ ^ - * | | | - * |<->|<----------------->|<--->| - * ... |---x---|------| ... 
|------|-----x (now) - * - * p-1 - * u' = (u + d1) y^p + 1024 \Sum y^n + d3 y^0 - * n=1 - * - * = u y^p + (Step 1) - * - * p-1 - * d1 y^p + 1024 \Sum y^n + d3 y^0 (Step 2) - * n=1 - */ -static __always_inline u32 -accumulate_sum(u64 delta, int cpu, struct sched_avg *sa, - unsigned long load, unsigned long runnable, int running) -{ - unsigned long scale_freq, scale_cpu; - u32 contrib = (u32)delta; /* p == 0 -> delta < 1024 */ - u64 periods; - - scale_freq = arch_scale_freq_capacity(cpu); - scale_cpu = arch_scale_cpu_capacity(NULL, cpu); - - delta += sa->period_contrib; - periods = delta / 1024; /* A period is 1024us (~1ms) */ - - /* - * Step 1: decay old *_sum if we crossed period boundaries. - */ - if (periods) { - sa->load_sum = decay_load(sa->load_sum, periods); - sa->runnable_load_sum = - decay_load(sa->runnable_load_sum, periods); - sa->util_sum = decay_load((u64)(sa->util_sum), periods); - - /* - * Step 2 - */ - delta %= 1024; - contrib = __accumulate_pelt_segments(periods, - 1024 - sa->period_contrib, delta); - } - sa->period_contrib = delta; - - contrib = cap_scale(contrib, scale_freq); - if (load) - sa->load_sum += load * contrib; - if (runnable) - sa->runnable_load_sum += runnable * contrib; - if (running) - sa->util_sum += contrib * scale_cpu; - - return periods; -} - -/* - * We can represent the historical contribution to runnable average as the - * coefficients of a geometric series. To do this we sub-divide our runnable - * history into segments of approximately 1ms (1024us); label the segment that - * occurred N-ms ago p_N, with p_0 corresponding to the current period, e.g. - * - * [<- 1024us ->|<- 1024us ->|<- 1024us ->| ... - * p0 p1 p2 - * (now) (~1ms ago) (~2ms ago) - * - * Let u_i denote the fraction of p_i that the entity was runnable. - * - * We then designate the fractions u_i as our co-efficients, yielding the - * following representation of historical load: - * u_0 + u_1*y + u_2*y^2 + u_3*y^3 + ... 
- * - * We choose y based on the with of a reasonably scheduling period, fixing: - * y^32 = 0.5 - * - * This means that the contribution to load ~32ms ago (u_32) will be weighted - * approximately half as much as the contribution to load within the last ms - * (u_0). - * - * When a period "rolls over" and we have new u_0`, multiplying the previous - * sum again by y is sufficient to update: - * load_avg = u_0` + y*(u_0 + u_1*y + u_2*y^2 + ... ) - * = u_0 + u_1*y + u_2*y^2 + ... [re-labeling u_i --> u_{i+1}] - */ -static __always_inline int -___update_load_sum(u64 now, int cpu, struct sched_avg *sa, - unsigned long load, unsigned long runnable, int running) -{ - u64 delta; - - delta = now - sa->last_update_time; - /* - * This should only happen when time goes backwards, which it - * unfortunately does during sched clock init when we swap over to TSC. - */ - if ((s64)delta < 0) { - sa->last_update_time = now; - return 0; - } - - /* - * Use 1024ns as the unit of measurement since it's a reasonable - * approximation of 1us and fast to compute. - */ - delta >>= 10; - if (!delta) - return 0; - - sa->last_update_time += delta << 10; - - /* - * running is a subset of runnable (weight) so running can't be set if - * runnable is clear. But there are some corner cases where the current - * se has been already dequeued but cfs_rq->curr still points to it. - * This means that weight will be 0 but not running for a sched_entity - * but also for a cfs_rq if the latter becomes idle. As an example, - * this happens during idle_balance() which calls - * update_blocked_averages() - */ - if (!load) - runnable = running = 0; - - /* - * Now we know we crossed measurement unit boundaries. The *_avg - * accrues by two steps: - * - * Step 1: accumulate *_sum since last_update_time. If we haven't - * crossed period boundaries, finish. 
- */ - if (!accumulate_sum(delta, cpu, sa, load, runnable, running)) - return 0; - - return 1; -} - -static __always_inline void -___update_load_avg(struct sched_avg *sa, unsigned long load, unsigned long runnable) -{ - u32 divider = LOAD_AVG_MAX - 1024 + sa->period_contrib; - - /* - * Step 2: update *_avg. - */ - sa->load_avg = div_u64(load * sa->load_sum, divider); - sa->runnable_load_avg = div_u64(runnable * sa->runnable_load_sum, divider); - sa->util_avg = sa->util_sum / divider; -} - -/* - * When a task is dequeued, its estimated utilization should not be update if - * its util_avg has not been updated at least once. - * This flag is used to synchronize util_avg updates with util_est updates. - * We map this information into the LSB bit of the utilization saved at - * dequeue time (i.e. util_est.dequeued). - */ -#define UTIL_AVG_UNCHANGED 0x1 - -static inline void cfs_se_util_change(struct sched_avg *avg) -{ - unsigned int enqueued; - - if (!sched_feat(UTIL_EST)) - return; - - /* Avoid store if the flag has been already set */ - enqueued = avg->util_est.enqueued; - if (!(enqueued & UTIL_AVG_UNCHANGED)) - return; - - /* Reset flag to report util_avg has been updated */ - enqueued &= ~UTIL_AVG_UNCHANGED; - WRITE_ONCE(avg->util_est.enqueued, enqueued); -} - -/* - * sched_entity: - * - * task: - * se_runnable() == se_weight() - * - * group: [ see update_cfs_group() ] - * se_weight() = tg->weight * grq->load_avg / tg->load_avg - * se_runnable() = se_weight(se) * grq->runnable_load_avg / grq->load_avg - * - * load_sum := runnable_sum - * load_avg = se_weight(se) * runnable_avg - * - * runnable_load_sum := runnable_sum - * runnable_load_avg = se_runnable(se) * runnable_avg - * - * XXX collapse load_sum and runnable_load_sum - * - * cfq_rs: - * - * load_sum = \Sum se_weight(se) * se->avg.load_sum - * load_avg = \Sum se->avg.load_avg - * - * runnable_load_sum = \Sum se_runnable(se) * se->avg.runnable_load_sum - * runnable_load_avg = \Sum se->avg.runable_load_avg - */ - 
-static int -__update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se) -{ - if (entity_is_task(se)) - se->runnable_weight = se->load.weight; - - if (___update_load_sum(now, cpu, &se->avg, 0, 0, 0)) { - ___update_load_avg(&se->avg, se_weight(se), se_runnable(se)); - return 1; - } - - return 0; -} - -static int -__update_load_avg_se(u64 now, int cpu, struct cfs_rq *cfs_rq, struct sched_entity *se) -{ - if (entity_is_task(se)) - se->runnable_weight = se->load.weight; - - if (___update_load_sum(now, cpu, &se->avg, !!se->on_rq, !!se->on_rq, - cfs_rq->curr == se)) { - - ___update_load_avg(&se->avg, se_weight(se), se_runnable(se)); - cfs_se_util_change(&se->avg); - return 1; - } - - return 0; -} - -static int -__update_load_avg_cfs_rq(u64 now, int cpu, struct cfs_rq *cfs_rq) -{ - if (___update_load_sum(now, cpu, &cfs_rq->avg, - scale_load_down(cfs_rq->load.weight), - scale_load_down(cfs_rq->runnable_weight), - cfs_rq->curr != NULL)) { - - ___update_load_avg(&cfs_rq->avg, 1, 1); - return 1; - } - - return 0; -} - #ifdef CONFIG_FAIR_GROUP_SCHED /** * update_tg_load_avg - update the tg's load avg @@ -4045,12 +3720,6 @@ util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep) #else /* CONFIG_SMP */ -static inline int -update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq) -{ - return 0; -} - #define UPDATE_TG 0x0 #define SKIP_AGE_LOAD 0x0 #define DO_ATTACH 0x0 diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c new file mode 100644 index 0000000..e6ecbb2 --- /dev/null +++ b/kernel/sched/pelt.c @@ -0,0 +1,311 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Per Entity Load Tracking + * + * Copyright (C) 2007 Red Hat, Inc., Ingo Molnar + * + * Interactivity improvements by Mike Galbraith + * (C) 2007 Mike Galbraith + * + * Various enhancements by Dmitry Adamushko. 
+ * (C) 2007 Dmitry Adamushko + * + * Group scheduling enhancements by Srivatsa Vaddagiri + * Copyright IBM Corporation, 2007 + * Author: Srivatsa Vaddagiri + * + * Scaled math optimizations by Thomas Gleixner + * Copyright (C) 2007, Thomas Gleixner + * + * Adaptive scheduling granularity, math enhancements by Peter Zijlstra + * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra + * + * Move PELT related code from fair.c into this pelt.c file + * Author: Vincent Guittot + */ + +#include +#include "sched.h" +#include "sched-pelt.h" +#include "pelt.h" + +/* + * Approximate: + * val * y^n, where y^32 ~= 0.5 (~1 scheduling period) + */ +static u64 decay_load(u64 val, u64 n) +{ + unsigned int local_n; + + if (unlikely(n > LOAD_AVG_PERIOD * 63)) + return 0; + + /* after bounds checking we can collapse to 32-bit */ + local_n = n; + + /* + * As y^PERIOD = 1/2, we can combine + * y^n = 1/2^(n/PERIOD) * y^(n%PERIOD) + * With a look-up table which covers y^n (n= LOAD_AVG_PERIOD)) { + val >>= local_n / LOAD_AVG_PERIOD; + local_n %= LOAD_AVG_PERIOD; + } + + val = mul_u64_u32_shr(val, runnable_avg_yN_inv[local_n], 32); + return val; +} + +static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3) +{ + u32 c1, c2, c3 = d3; /* y^0 == 1 */ + + /* + * c1 = d1 y^p + */ + c1 = decay_load((u64)d1, periods); + + /* + * p-1 + * c2 = 1024 \Sum y^n + * n=1 + * + * inf inf + * = 1024 ( \Sum y^n - \Sum y^n - y^0 ) + * n=0 n=p + */ + c2 = LOAD_AVG_MAX - decay_load(LOAD_AVG_MAX, periods) - 1024; + + return c1 + c2 + c3; +} + +#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT) + +/* + * Accumulate the three separate parts of the sum; d1 the remainder + * of the last (incomplete) period, d2 the span of full periods and d3 + * the remainder of the (incomplete) current period. + * + * d1 d2 d3 + * ^ ^ ^ + * | | | + * |<->|<----------------->|<--->| + * ... |---x---|------| ... 
|------|-----x (now) + * + * p-1 + * u' = (u + d1) y^p + 1024 \Sum y^n + d3 y^0 + * n=1 + * + * = u y^p + (Step 1) + * + * p-1 + * d1 y^p + 1024 \Sum y^n + d3 y^0 (Step 2) + * n=1 + */ +static __always_inline u32 +accumulate_sum(u64 delta, int cpu, struct sched_avg *sa, + unsigned long load, unsigned long runnable, int running) +{ + unsigned long scale_freq, scale_cpu; + u32 contrib = (u32)delta; /* p == 0 -> delta < 1024 */ + u64 periods; + + scale_freq = arch_scale_freq_capacity(cpu); + scale_cpu = arch_scale_cpu_capacity(NULL, cpu); + + delta += sa->period_contrib; + periods = delta / 1024; /* A period is 1024us (~1ms) */ + + /* + * Step 1: decay old *_sum if we crossed period boundaries. + */ + if (periods) { + sa->load_sum = decay_load(sa->load_sum, periods); + sa->runnable_load_sum = + decay_load(sa->runnable_load_sum, periods); + sa->util_sum = decay_load((u64)(sa->util_sum), periods); + + /* + * Step 2 + */ + delta %= 1024; + contrib = __accumulate_pelt_segments(periods, + 1024 - sa->period_contrib, delta); + } + sa->period_contrib = delta; + + contrib = cap_scale(contrib, scale_freq); + if (load) + sa->load_sum += load * contrib; + if (runnable) + sa->runnable_load_sum += runnable * contrib; + if (running) + sa->util_sum += contrib * scale_cpu; + + return periods; +} + +/* + * We can represent the historical contribution to runnable average as the + * coefficients of a geometric series. To do this we sub-divide our runnable + * history into segments of approximately 1ms (1024us); label the segment that + * occurred N-ms ago p_N, with p_0 corresponding to the current period, e.g. + * + * [<- 1024us ->|<- 1024us ->|<- 1024us ->| ... + * p0 p1 p2 + * (now) (~1ms ago) (~2ms ago) + * + * Let u_i denote the fraction of p_i that the entity was runnable. + * + * We then designate the fractions u_i as our co-efficients, yielding the + * following representation of historical load: + * u_0 + u_1*y + u_2*y^2 + u_3*y^3 + ... 
+ * + * We choose y based on the with of a reasonably scheduling period, fixing: + * y^32 = 0.5 + * + * This means that the contribution to load ~32ms ago (u_32) will be weighted + * approximately half as much as the contribution to load within the last ms + * (u_0). + * + * When a period "rolls over" and we have new u_0`, multiplying the previous + * sum again by y is sufficient to update: + * load_avg = u_0` + y*(u_0 + u_1*y + u_2*y^2 + ... ) + * = u_0 + u_1*y + u_2*y^2 + ... [re-labeling u_i --> u_{i+1}] + */ +static __always_inline int +___update_load_sum(u64 now, int cpu, struct sched_avg *sa, + unsigned long load, unsigned long runnable, int running) +{ + u64 delta; + + delta = now - sa->last_update_time; + /* + * This should only happen when time goes backwards, which it + * unfortunately does during sched clock init when we swap over to TSC. + */ + if ((s64)delta < 0) { + sa->last_update_time = now; + return 0; + } + + /* + * Use 1024ns as the unit of measurement since it's a reasonable + * approximation of 1us and fast to compute. + */ + delta >>= 10; + if (!delta) + return 0; + + sa->last_update_time += delta << 10; + + /* + * running is a subset of runnable (weight) so running can't be set if + * runnable is clear. But there are some corner cases where the current + * se has been already dequeued but cfs_rq->curr still points to it. + * This means that weight will be 0 but not running for a sched_entity + * but also for a cfs_rq if the latter becomes idle. As an example, + * this happens during idle_balance() which calls + * update_blocked_averages() + */ + if (!load) + runnable = running = 0; + + /* + * Now we know we crossed measurement unit boundaries. The *_avg + * accrues by two steps: + * + * Step 1: accumulate *_sum since last_update_time. If we haven't + * crossed period boundaries, finish. 
+ */ + if (!accumulate_sum(delta, cpu, sa, load, runnable, running)) + return 0; + + return 1; +} + +static __always_inline void +___update_load_avg(struct sched_avg *sa, unsigned long load, unsigned long runnable) +{ + u32 divider = LOAD_AVG_MAX - 1024 + sa->period_contrib; + + /* + * Step 2: update *_avg. + */ + sa->load_avg = div_u64(load * sa->load_sum, divider); + sa->runnable_load_avg = div_u64(runnable * sa->runnable_load_sum, divider); + sa->util_avg = sa->util_sum / divider; +} + +/* + * sched_entity: + * + * task: + * se_runnable() == se_weight() + * + * group: [ see update_cfs_group() ] + * se_weight() = tg->weight * grq->load_avg / tg->load_avg + * se_runnable() = se_weight(se) * grq->runnable_load_avg / grq->load_avg + * + * load_sum := runnable_sum + * load_avg = se_weight(se) * runnable_avg + * + * runnable_load_sum := runnable_sum + * runnable_load_avg = se_runnable(se) * runnable_avg + * + * XXX collapse load_sum and runnable_load_sum + * + * cfq_rq: + * + * load_sum = \Sum se_weight(se) * se->avg.load_sum + * load_avg = \Sum se->avg.load_avg + * + * runnable_load_sum = \Sum se_runnable(se) * se->avg.runnable_load_sum + * runnable_load_avg = \Sum se->avg.runable_load_avg + */ + +int __update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se) +{ + if (entity_is_task(se)) + se->runnable_weight = se->load.weight; + + if (___update_load_sum(now, cpu, &se->avg, 0, 0, 0)) { + ___update_load_avg(&se->avg, se_weight(se), se_runnable(se)); + return 1; + } + + return 0; +} + +int __update_load_avg_se(u64 now, int cpu, struct cfs_rq *cfs_rq, struct sched_entity *se) +{ + if (entity_is_task(se)) + se->runnable_weight = se->load.weight; + + if (___update_load_sum(now, cpu, &se->avg, !!se->on_rq, !!se->on_rq, + cfs_rq->curr == se)) { + + ___update_load_avg(&se->avg, se_weight(se), se_runnable(se)); + cfs_se_util_change(&se->avg); + return 1; + } + + return 0; +} + +int __update_load_avg_cfs_rq(u64 now, int cpu, struct cfs_rq *cfs_rq) +{ + if 
(___update_load_sum(now, cpu, &cfs_rq->avg, + scale_load_down(cfs_rq->load.weight), + scale_load_down(cfs_rq->runnable_weight), + cfs_rq->curr != NULL)) { + + ___update_load_avg(&cfs_rq->avg, 1, 1); + return 1; + } + + return 0; +} diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h new file mode 100644 index 0000000..9cac73e --- /dev/null +++ b/kernel/sched/pelt.h @@ -0,0 +1,43 @@ +#ifdef CONFIG_SMP + +int __update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se); +int __update_load_avg_se(u64 now, int cpu, struct cfs_rq *cfs_rq, struct sched_entity *se); +int __update_load_avg_cfs_rq(u64 now, int cpu, struct cfs_rq *cfs_rq); + +/* + * When a task is dequeued, its estimated utilization should not be update if + * its util_avg has not been updated at least once. + * This flag is used to synchronize util_avg updates with util_est updates. + * We map this information into the LSB bit of the utilization saved at + * dequeue time (i.e. util_est.dequeued). + */ +#define UTIL_AVG_UNCHANGED 0x1 + +static inline void cfs_se_util_change(struct sched_avg *avg) +{ + unsigned int enqueued; + + if (!sched_feat(UTIL_EST)) + return; + + /* Avoid store if the flag has been already set */ + enqueued = avg->util_est.enqueued; + if (!(enqueued & UTIL_AVG_UNCHANGED)) + return; + + /* Reset flag to report util_avg has been updated */ + enqueued &= ~UTIL_AVG_UNCHANGED; + WRITE_ONCE(avg->util_est.enqueued, enqueued); +} + +#else + +static inline int +update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq) +{ + return 0; +} + +#endif + + diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 67702b4..757a3ee 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -666,7 +666,26 @@ struct dl_rq { u64 bw_ratio; }; +#ifdef CONFIG_FAIR_GROUP_SCHED +/* An entity is a task if it doesn't "own" a runqueue */ +#define entity_is_task(se) (!se->my_q) +#else +#define entity_is_task(se) 1 +#endif + #ifdef CONFIG_SMP +/* + * XXX we want to get rid of these helpers and 
use the full load resolution. + */ +static inline long se_weight(struct sched_entity *se) +{ + return scale_load_down(se->load.weight); +} + +static inline long se_runnable(struct sched_entity *se) +{ + return scale_load_down(se->runnable_weight); +} static inline bool sched_asym_prefer(int a, int b) {

From patchwork Fri May 25 13:12:23 2018
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 136864
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org, rjw@rjwysocki.net
Cc: juri.lelli@redhat.com, dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com, viresh.kumar@linaro.org, valentin.schneider@arm.com, quentin.perret@arm.com, Vincent Guittot
Subject: [PATCH v5 02/10] sched/rt: add rt_rq utilization tracking
Date: Fri, 25 May 2018 15:12:23 +0200
Message-Id: <1527253951-22709-3-git-send-email-vincent.guittot@linaro.org>
In-Reply-To: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

The schedutil governor relies on cfs_rq's util_avg to choose the OPP when cfs tasks are running. When the CPU is overloaded by cfs and rt tasks, cfs tasks are preempted by rt tasks, and in this case util_avg reflects the remaining capacity but not what cfs tasks actually want to use. In such a case, schedutil can select a lower OPP even though the CPU is overloaded.
In order to have a more accurate view of the utilization of the CPU, we track the utilization that is "stolen" by rt tasks. rt_rq uses rq_clock_task and cfs_rq uses cfs_rq_clock_task, but they are the same at the root group level, so the PELT windows of the util_sum are aligned.

Signed-off-by: Vincent Guittot
---
 kernel/sched/fair.c  | 15 ++++++++++++++-
 kernel/sched/pelt.c  | 23 +++++++++++++++++++++++
 kernel/sched/pelt.h  |  7 +++++++
 kernel/sched/rt.c    |  8 ++++++++
 kernel/sched/sched.h |  7 +++++++
 5 files changed, 59 insertions(+), 1 deletion(-)

-- 
2.7.4

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6390c66..fb18bcc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7290,6 +7290,14 @@ static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)
 	return false;
 }
 
+static inline bool rt_rq_has_blocked(struct rq *rq)
+{
+	if (rq->avg_rt.util_avg)
+		return true;
+
+	return false;
+}
+
 #ifdef CONFIG_FAIR_GROUP_SCHED
 
 static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
@@ -7349,6 +7357,10 @@ static void update_blocked_averages(int cpu)
 		if (cfs_rq_has_blocked(cfs_rq))
 			done = false;
 	}
+	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
+	/* Don't need periodic decay once load/util_avg are null */
+	if (rt_rq_has_blocked(rq))
+		done = false;
 
 #ifdef CONFIG_NO_HZ_COMMON
 	rq->last_blocked_load_update_tick = jiffies;
@@ -7414,9 +7426,10 @@ static inline void update_blocked_averages(int cpu)
 	rq_lock_irqsave(rq, &rf);
 	update_rq_clock(rq);
 	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
+	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
 #ifdef CONFIG_NO_HZ_COMMON
 	rq->last_blocked_load_update_tick = jiffies;
-	if (!cfs_rq_has_blocked(cfs_rq))
+	if (!cfs_rq_has_blocked(cfs_rq) && !rt_rq_has_blocked(rq))
 		rq->has_blocked_load = 0;
 #endif
 	rq_unlock_irqrestore(rq, &rf);
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index e6ecbb2..213b922 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -309,3 +309,26 @@ int __update_load_avg_cfs_rq(u64 now, int cpu, struct cfs_rq *cfs_rq)
 
 	return 0;
 }
+
+/*
+ * rt_rq:
+ *
+ *   util_sum = \Sum se->avg.util_sum but se->avg.util_sum is not tracked
+ *   util_sum = cpu_scale * load_sum
+ *   runnable_load_sum = load_sum
+ *
+ */
+
+int update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
+{
+	if (___update_load_sum(now, rq->cpu, &rq->avg_rt,
+				running,
+				running,
+				running)) {
+
+		___update_load_avg(&rq->avg_rt, 1, 1);
+		return 1;
+	}
+
+	return 0;
+}
diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
index 9cac73e..b2983b7 100644
--- a/kernel/sched/pelt.h
+++ b/kernel/sched/pelt.h
@@ -3,6 +3,7 @@
 int __update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se);
 int __update_load_avg_se(u64 now, int cpu, struct cfs_rq *cfs_rq, struct sched_entity *se);
 int __update_load_avg_cfs_rq(u64 now, int cpu, struct cfs_rq *cfs_rq);
+int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
 
 /*
  * When a task is dequeued, its estimated utilization should not be update if
@@ -38,6 +39,12 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 	return 0;
 }
 
+static inline int
+update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
+{
+	return 0;
+}
+
 #endif
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index ef3c4e6..b4148a9 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -5,6 +5,8 @@
  */
 #include "sched.h"
 
+#include "pelt.h"
+
 int sched_rr_timeslice = RR_TIMESLICE;
 int sysctl_sched_rr_timeslice = (MSEC_PER_SEC / HZ) * RR_TIMESLICE;
 
@@ -1572,6 +1574,9 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 
 	rt_queue_push_tasks(rq);
 
+	update_rt_rq_load_avg(rq_clock_task(rq), rq,
+		rq->curr->sched_class == &rt_sched_class);
+
 	return p;
 }
 
@@ -1579,6 +1584,8 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
 {
 	update_curr_rt(rq);
 
+	update_rt_rq_load_avg(rq_clock_task(rq), rq, 1);
+
 	/*
 	 * The previous task needs to be made eligible for pushing
 	 * if it is still active
@@ -2308,6 +2315,7 @@ static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
 	struct sched_rt_entity *rt_se = &p->rt;
 
 	update_curr_rt(rq);
+	update_rt_rq_load_avg(rq_clock_task(rq), rq, 1);
 
 	watchdog(rq, p);
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 757a3ee..7a16de9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -592,6 +592,7 @@ struct rt_rq {
 	unsigned long rt_nr_total;
 	int overloaded;
 	struct plist_head pushable_tasks;
+
 #endif /* CONFIG_SMP */
 	int rt_queued;
 
@@ -847,6 +848,7 @@ struct rq {
 
 	u64 rt_avg;
 	u64 age_stamp;
+	struct sched_avg avg_rt;
 	u64 idle_stamp;
 	u64 avg_idle;
 
@@ -2205,4 +2207,9 @@ static inline unsigned long cpu_util_cfs(struct rq *rq)
 
 	return util;
 }
+
+static inline unsigned long cpu_util_rt(struct rq *rq)
+{
+	return rq->avg_rt.util_avg;
+}
 #endif

From patchwork Fri May 25 13:12:24 2018
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 136862
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org, rjw@rjwysocki.net
Cc: juri.lelli@redhat.com, dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com, viresh.kumar@linaro.org, valentin.schneider@arm.com, quentin.perret@arm.com, Vincent Guittot
Subject: [PATCH v5 03/10] cpufreq/schedutil: add rt utilization tracking
Date: Fri, 25 May 2018 15:12:24 +0200
Message-Id: <1527253951-22709-4-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To:
<1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>

Take both cfs and rt utilization into account when selecting an OPP for cfs tasks, as rt tasks can preempt cfs tasks and steal their running time.

Signed-off-by: Vincent Guittot
---
 kernel/sched/cpufreq_schedutil.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

-- 
2.7.4

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 28592b6..a84b5a5 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -56,6 +56,7 @@ struct sugov_cpu {
 	/* The fields below are only needed when sharing a policy: */
 	unsigned long util_cfs;
 	unsigned long util_dl;
+	unsigned long util_rt;
 	unsigned long max;
 
 	/* The field below is for single-CPU policies only: */
@@ -178,14 +179,21 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
 	sg_cpu->max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
 	sg_cpu->util_cfs = cpu_util_cfs(rq);
 	sg_cpu->util_dl = cpu_util_dl(rq);
+	sg_cpu->util_rt = cpu_util_rt(rq);
 }
 
 static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
 {
 	struct rq *rq = cpu_rq(sg_cpu->cpu);
+	unsigned long util;
 
-	if (rq->rt.rt_nr_running)
-		return sg_cpu->max;
+	if (rq->rt.rt_nr_running) {
+		util = sg_cpu->max;
+	} else {
+		util = sg_cpu->util_dl;
+		util += sg_cpu->util_cfs;
+		util += sg_cpu->util_rt;
+	}
 
 	/*
 	 * Utilization required by DEADLINE must always be granted while, for
@@ -197,7 +205,7 @@ static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
 	 * util_cfs + util_dl as requested freq. However, cpufreq is not yet
 	 * ready for such an interface. So, we only do the latter for now.
	 */
-	return min(sg_cpu->max, (sg_cpu->util_dl + sg_cpu->util_cfs));
+	return min(sg_cpu->max, util);
 }
 
 static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, unsigned int flags)

From patchwork Fri May 25 13:12:25 2018
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 136869
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org, rjw@rjwysocki.net
Cc: juri.lelli@redhat.com, dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com, viresh.kumar@linaro.org, valentin.schneider@arm.com, quentin.perret@arm.com, Vincent Guittot
Subject: [PATCH v5 04/10] sched/dl: add dl_rq utilization tracking
Date: Fri, 25 May 2018 15:12:25 +0200
Message-Id: <1527253951-22709-5-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>

Similarly to what happens with rt tasks, cfs tasks can be preempted by dl tasks, and the cfs utilization might then no longer describe the real utilization level. The current dl bandwidth reflects the requirement needed to meet deadlines when tasks are enqueued, not the actual utilization of the dl sched class. Track the dl class utilization to better estimate system utilization.
Signed-off-by: Vincent Guittot
---
 kernel/sched/deadline.c | 5 +++++
 kernel/sched/fair.c     | 11 ++++++++---
 kernel/sched/pelt.c     | 23 +++++++++++++++++++++++
 kernel/sched/pelt.h     | 6 ++++++
 kernel/sched/sched.h    | 1 +
 5 files changed, 43 insertions(+), 3 deletions(-)

-- 
2.7.4

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 1356afd..950b3fb 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -16,6 +16,7 @@
  * Fabio Checconi
  */
 #include "sched.h"
+#include "pelt.h"
 
 struct dl_bandwidth def_dl_bandwidth;
 
@@ -1761,6 +1762,8 @@ pick_next_task_dl(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 
 	deadline_queue_push_tasks(rq);
 
+	update_dl_rq_load_avg(rq_clock_task(rq), rq,
+		rq->curr->sched_class == &dl_sched_class);
 
 	return p;
 }
 
@@ -1768,6 +1771,7 @@ static void put_prev_task_dl(struct rq *rq, struct task_struct *p)
 {
 	update_curr_dl(rq);
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, 1);
 	if (on_dl_rq(&p->dl) && p->nr_cpus_allowed > 1)
 		enqueue_pushable_dl_task(rq, p);
 }
@@ -1784,6 +1788,7 @@ static void task_tick_dl(struct rq *rq, struct task_struct *p, int queued)
 {
 	update_curr_dl(rq);
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, 1);
 
 	/*
 	 * Even when we have runtime, update_curr_dl() might have resulted in us
 	 * not being the leftmost task anymore. In that case NEED_RESCHED will
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fb18bcc..967e873 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7290,11 +7290,14 @@ static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)
 	return false;
 }
 
-static inline bool rt_rq_has_blocked(struct rq *rq)
+static inline bool others_rqs_have_blocked(struct rq *rq)
 {
 	if (rq->avg_rt.util_avg)
 		return true;
 
+	if (rq->avg_dl.util_avg)
+		return true;
+
 	return false;
 }
 
@@ -7358,8 +7361,9 @@ static void update_blocked_averages(int cpu)
 			done = false;
 	}
 	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
 	/* Don't need periodic decay once load/util_avg are null */
-	if (rt_rq_has_blocked(rq))
+	if (others_rqs_have_blocked(rq))
 		done = false;
 
 #ifdef CONFIG_NO_HZ_COMMON
@@ -7427,9 +7431,10 @@ static inline void update_blocked_averages(int cpu)
 	update_rq_clock(rq);
 	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
 	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
 #ifdef CONFIG_NO_HZ_COMMON
 	rq->last_blocked_load_update_tick = jiffies;
-	if (!cfs_rq_has_blocked(cfs_rq) && !rt_rq_has_blocked(rq))
+	if (!cfs_rq_has_blocked(cfs_rq) && !others_rqs_have_blocked(rq))
 		rq->has_blocked_load = 0;
 #endif
 	rq_unlock_irqrestore(rq, &rf);
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 213b922..b07db80 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -332,3 +332,26 @@ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
 
 	return 0;
 }
+
+/*
+ * dl_rq:
+ *
+ *   util_sum = \Sum se->avg.util_sum but se->avg.util_sum is not tracked
+ *   util_sum = cpu_scale * load_sum
+ *   runnable_load_sum = load_sum
+ *
+ */
+
+int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+{
+	if (___update_load_sum(now, rq->cpu, &rq->avg_dl,
+				running,
+				running,
+				running)) {
+
+		___update_load_avg(&rq->avg_dl, 1, 1);
+		return 1;
+	}
+
+	return 0;
+}
diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
index b2983b7..0e4f912 100644
--- a/kernel/sched/pelt.h
+++ b/kernel/sched/pelt.h
@@ -4,6 +4,7 @@ int __update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se);
 int __update_load_avg_se(u64 now, int cpu, struct cfs_rq *cfs_rq, struct sched_entity *se);
 int __update_load_avg_cfs_rq(u64 now, int cpu, struct cfs_rq *cfs_rq);
 int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
 
 /*
  * When a task is dequeued, its estimated utilization should not be update if
@@ -45,6 +46,11 @@ update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
 	return 0;
 }
 
+static inline int
+update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+{
+	return 0;
+}
 #endif
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 7a16de9..4526ba6 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -849,6 +849,7 @@ struct rq {
 	u64 rt_avg;
 	u64 age_stamp;
 	struct sched_avg avg_rt;
+	struct sched_avg avg_dl;
 	u64 idle_stamp;
 	u64 avg_idle;

From patchwork Fri May 25 13:12:26 2018
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 136868
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org, rjw@rjwysocki.net
Cc: juri.lelli@redhat.com, dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com, viresh.kumar@linaro.org, valentin.schneider@arm.com, quentin.perret@arm.com, Vincent Guittot
Subject: [PATCH v5 05/10] cpufreq/schedutil: get max utilization
Date: Fri, 25 May 2018 15:12:26 +0200
Message-Id: <1527253951-22709-6-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>

Now that we have both the dl class bandwidth requirement and the dl class utilization, we can use the max of the two values when aggregating the utilization of the CPU.
Signed-off-by: Vincent Guittot
---
 kernel/sched/sched.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

-- 
2.7.4

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4526ba6..0eb07a8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2194,7 +2194,11 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
 #ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL
 static inline unsigned long cpu_util_dl(struct rq *rq)
 {
-	return (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;
+	unsigned long util = (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;
+
+	util = max_t(unsigned long, util, READ_ONCE(rq->avg_dl.util_avg));
+
+	return util;
 }
 
 static inline unsigned long cpu_util_cfs(struct rq *rq)

From patchwork Fri May 25 13:12:27 2018
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 136860
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org, rjw@rjwysocki.net
Cc: juri.lelli@redhat.com, dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com, viresh.kumar@linaro.org, valentin.schneider@arm.com, quentin.perret@arm.com, Vincent Guittot
Subject: [PATCH v5 06/10] sched: remove rt and dl from sched_avg
Date: Fri, 25 May 2018 15:12:27 +0200
Message-Id: <1527253951-22709-7-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
The utilization level of the CPU by rt and dl tasks is now tracked with PELT, so we can use these metrics and remove them from rt_avg, which will then track only interrupt and stolen virtual time.

Signed-off-by: Vincent Guittot
---
 kernel/sched/deadline.c | 2 --
 kernel/sched/fair.c     | 2 ++
 kernel/sched/pelt.c     | 2 +-
 kernel/sched/rt.c       | 2 --
 4 files changed, 3 insertions(+), 5 deletions(-)

--
2.7.4

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 950b3fb..da839e7 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1180,8 +1180,6 @@ static void update_curr_dl(struct rq *rq)
 	curr->se.exec_start = now;
 	cgroup_account_cputime(curr, delta_exec);
 
-	sched_rt_avg_update(rq, delta_exec);
-
 	if (dl_entity_is_special(dl_se))
 		return;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 967e873..da75eda 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7562,6 +7562,8 @@ static unsigned long scale_rt_capacity(int cpu)
 	used = div_u64(avg, total);
 
+	used += READ_ONCE(rq->avg_rt.util_avg);
+	used += READ_ONCE(rq->avg_dl.util_avg);
 	if (likely(used < SCHED_CAPACITY_SCALE))
 		return SCHED_CAPACITY_SCALE - used;
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index b07db80..3d5bd3a 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -237,7 +237,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load, unsigned long runna
 	 */
 	sa->load_avg = div_u64(load * sa->load_sum, divider);
 	sa->runnable_load_avg = div_u64(runnable * sa->runnable_load_sum, divider);
-	sa->util_avg = sa->util_sum / divider;
+	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
 }
 
 /*
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index b4148a9..3393c63 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -970,8 +970,6 @@ static void update_curr_rt(struct rq *rq)
 	curr->se.exec_start = now;
 	cgroup_account_cputime(curr, delta_exec);
 
-	sched_rt_avg_update(rq, delta_exec);
-
 	if (!rt_bandwidth_enabled())
 		return;

From patchwork Fri May 25 13:12:28 2018
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org, rjw@rjwysocki.net
Cc: juri.lelli@redhat.com, dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com, viresh.kumar@linaro.org, valentin.schneider@arm.com, quentin.perret@arm.com, Vincent Guittot
Subject: [PATCH v5 07/10] sched/irq: add irq utilization tracking
Date: Fri, 25 May 2018 15:12:28 +0200
Message-Id: <1527253951-22709-8-git-send-email-vincent.guittot@linaro.org>
In-Reply-To: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>

Interrupt and steal time are the only remaining activities tracked by rt_avg. As with the sched classes, we can use PELT to track their average utilization of the CPU. But unlike the sched classes, we don't track when entering/leaving interrupt; instead, we take into account the time spent under interrupt context when we update the rq's clock (rq_clock_task). This also means that we have to decay the normal context time and account for interrupt time during the update.
It is also important to note that, because rq_clock == rq_clock_task + interrupt time, and rq_clock_task is used by a sched class to compute its utilization, the util_avg of a sched class only reflects the utilization of the time spent in normal context and not of the whole time of the CPU. Adding the utilization of interrupt gives a more accurate level of utilization of the CPU.

The CPU utilization is:

  avg_irq + (1 - avg_irq / max capacity) * \Sum avg_rq

Most of the time avg_irq is small and negligible, so the approximation CPU utilization = \Sum avg_rq was enough.

Signed-off-by: Vincent Guittot
---
 kernel/sched/core.c  |  4 +++-
 kernel/sched/fair.c  | 26 +++++++-------------------
 kernel/sched/pelt.c  | 38 ++++++++++++++++++++++++++++++++++++++
 kernel/sched/pelt.h  |  7 +++++++
 kernel/sched/sched.h |  1 +
 5 files changed, 56 insertions(+), 20 deletions(-)

--
2.7.4

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d155518..ab58288 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -16,6 +16,8 @@
 #include "../workqueue_internal.h"
 #include "../smpboot.h"
 
+#include "pelt.h"
+
 #define CREATE_TRACE_POINTS
 #include 
@@ -184,7 +186,7 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)
 #if defined(CONFIG_IRQ_TIME_ACCOUNTING) || defined(CONFIG_PARAVIRT_TIME_ACCOUNTING)
 	if ((irq_delta + steal) && sched_feat(NONTASK_CAPACITY))
-		sched_rt_avg_update(rq, irq_delta + steal);
+		update_irq_load_avg(rq, irq_delta + steal);
 #endif
 }
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index da75eda..1bb3379 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5323,8 +5323,6 @@ static void cpu_load_update(struct rq *this_rq, unsigned long this_load,
 		this_rq->cpu_load[i] = (old_load * (scale - 1) + new_load) >> i;
 	}
-
-	sched_avg_update(this_rq);
 }
 
 /* Used instead of source_load when we know the type == 0 */
@@ -7298,6 +7296,9 @@ static inline bool others_rqs_have_blocked(struct rq *rq)
 	if (rq->avg_dl.util_avg)
 		return true;
 
+	if (rq->avg_irq.util_avg)
+		return true;
+
 	return false;
 }
@@ -7362,6 +7363,7 @@ static void update_blocked_averages(int cpu)
 	}
 	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
 	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
+	update_irq_load_avg(rq, 0);
 	/* Don't need periodic decay once load/util_avg are null */
 	if (others_rqs_have_blocked(rq))
 		done = false;
@@ -7432,6 +7434,7 @@ static inline void update_blocked_averages(int cpu)
 	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
 	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
 	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
+	update_irq_load_avg(rq, 0);
 #ifdef CONFIG_NO_HZ_COMMON
 	rq->last_blocked_load_update_tick = jiffies;
 	if (!cfs_rq_has_blocked(cfs_rq) && !others_rqs_have_blocked(rq))
@@ -7544,24 +7547,9 @@ static inline int get_sd_load_idx(struct sched_domain *sd,
 static unsigned long scale_rt_capacity(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
-	u64 total, used, age_stamp, avg;
-	s64 delta;
-
-	/*
-	 * Since we're reading these variables without serialization make sure
-	 * we read them once before doing sanity checks on them.
-	 */
-	age_stamp = READ_ONCE(rq->age_stamp);
-	avg = READ_ONCE(rq->rt_avg);
-	delta = __rq_clock_broken(rq) - age_stamp;
-
-	if (unlikely(delta < 0))
-		delta = 0;
-
-	total = sched_avg_period() + delta;
-
-	used = div_u64(avg, total);
+	unsigned long used;
+	used = READ_ONCE(rq->avg_irq.util_avg);
 	used += READ_ONCE(rq->avg_rt.util_avg);
 	used += READ_ONCE(rq->avg_dl.util_avg);
 	if (likely(used < SCHED_CAPACITY_SCALE))
 		return SCHED_CAPACITY_SCALE - used;
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 3d5bd3a..d2e4f21 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -355,3 +355,41 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
 
 	return 0;
 }
+
+/*
+ * irq:
+ *
+ *   util_sum = \Sum se->avg.util_sum but se->avg.util_sum is not tracked
+ *   util_sum = cpu_scale * load_sum
+ *   runnable_load_sum = load_sum
+ *
+ */
+
+int update_irq_load_avg(struct rq *rq, u64 running)
+{
+	int ret = 0;
+	/*
+	 * We know the time that has been used by interrupt since the last
+	 * update but we don't know when it happened. Let's be pessimistic and
+	 * assume that the interrupt has happened just before the update. This
+	 * is not so far from reality because the interrupt will most probably
+	 * wake up a task and trigger an update of the rq clock, during which
+	 * the metric is updated.
+	 * We start to decay with normal context time and then we add the
+	 * interrupt context time.
+	 * We can safely remove running from rq->clock because
+	 * rq->clock += delta with delta >= running
+	 */
+	ret = ___update_load_sum(rq->clock - running, rq->cpu, &rq->avg_irq,
+				0,
+				0,
+				0);
+	ret += ___update_load_sum(rq->clock, rq->cpu, &rq->avg_irq,
+				1,
+				1,
+				1);
+
+	if (ret)
+		___update_load_avg(&rq->avg_irq, 1, 1);
+
+	return ret;
+}
diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
index 0e4f912..0ce9a5a 100644
--- a/kernel/sched/pelt.h
+++ b/kernel/sched/pelt.h
@@ -5,6 +5,7 @@ int __update_load_avg_se(u64 now, int cpu, struct cfs_rq *cfs_rq, struct sched_e
 int __update_load_avg_cfs_rq(u64 now, int cpu, struct cfs_rq *cfs_rq);
 int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
 int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
+int update_irq_load_avg(struct rq *rq, u64 running);
 
 /*
  * When a task is dequeued, its estimated utilization should not be update if
@@ -51,6 +52,12 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
 {
 	return 0;
 }
+
+static inline int
+update_irq_load_avg(struct rq *rq, u64 running)
+{
+	return 0;
+}
 #endif
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 0eb07a8..f7e8d5b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -850,6 +850,7 @@ struct rq {
 	u64 age_stamp;
 	struct sched_avg avg_rt;
 	struct sched_avg avg_dl;
+	struct sched_avg avg_irq;
 	u64 idle_stamp;
 	u64 avg_idle;

From patchwork Fri May 25 13:12:29 2018
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org, rjw@rjwysocki.net
Cc: juri.lelli@redhat.com, dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com, viresh.kumar@linaro.org, valentin.schneider@arm.com, quentin.perret@arm.com, Vincent Guittot
Subject: [PATCH v5 08/10] cpufreq/schedutil: take into account interrupt
Date: Fri, 25 May 2018 15:12:29 +0200
Message-Id: <1527253951-22709-9-git-send-email-vincent.guittot@linaro.org>
In-Reply-To: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>

The time spent under interrupt can be significant, but it is not reflected in the utilization of the CPU when deciding to choose an OPP. Now that we have access to this metric, schedutil can take it into account when selecting the OPP for a CPU.
The CPU utilization is:

  irq util_avg + (1 - irq util_avg / max capacity) * \Sum rq util_avg

A test with iperf on hikey (octo arm64) gives:

  iperf -c server_address -r -t 5

       w/o patch        w/ patch
  Tx   276 Mbits/sec    304 Mbits/sec   +10%
  Rx   299 Mbits/sec    328 Mbits/sec    +9%

  8 iterations
  stdev is lower than 1%

Only the WFI idle state is enabled (the shallowest idle state).

Signed-off-by: Vincent Guittot
---
 kernel/sched/cpufreq_schedutil.c | 10 ++++++++++
 kernel/sched/sched.h             |  5 +++++
 2 files changed, 15 insertions(+)

--
2.7.4

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index a84b5a5..06f2080 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -57,6 +57,7 @@ struct sugov_cpu {
 	unsigned long util_cfs;
 	unsigned long util_dl;
 	unsigned long util_rt;
+	unsigned long util_irq;
 	unsigned long max;
 
 	/* The field below is for single-CPU policies only: */
@@ -180,6 +181,7 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
 	sg_cpu->util_cfs = cpu_util_cfs(rq);
 	sg_cpu->util_dl = cpu_util_dl(rq);
 	sg_cpu->util_rt = cpu_util_rt(rq);
+	sg_cpu->util_irq = cpu_util_irq(rq);
 }
 
 static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
@@ -190,9 +192,17 @@ static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
 	if (rq->rt.rt_nr_running) {
 		util = sg_cpu->max;
 	} else {
+		/* Sum rq utilization */
 		util = sg_cpu->util_dl;
 		util += sg_cpu->util_cfs;
 		util += sg_cpu->util_rt;
+
+		/* Weight rq's utilization to the normal context */
+		util *= (sg_cpu->max - sg_cpu->util_irq);
+		util /= sg_cpu->max;
+
+		/* Add interrupt utilization */
+		util += sg_cpu->util_irq;
 	}
 
 	/*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index f7e8d5b..718c55d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2218,4 +2218,9 @@ static inline unsigned long cpu_util_rt(struct rq *rq)
 {
 	return rq->avg_rt.util_avg;
 }
+
+static inline unsigned long cpu_util_irq(struct rq *rq)
+{
+	return rq->avg_irq.util_avg;
+}
 #endif

From patchwork Fri May 25 13:12:30 2018
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org, rjw@rjwysocki.net
Cc: juri.lelli@redhat.com, dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com, viresh.kumar@linaro.org, valentin.schneider@arm.com, quentin.perret@arm.com, Vincent Guittot
Subject: [PATCH v5 09/10] sched: remove rt_avg code
Date: Fri, 25 May 2018 15:12:30 +0200
Message-Id: <1527253951-22709-10-git-send-email-vincent.guittot@linaro.org>
In-Reply-To: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>

rt_avg is no longer used anywhere, so we can remove all the related code.

Signed-off-by: Vincent Guittot
---
 kernel/sched/core.c  | 26 --------------------------
 kernel/sched/sched.h | 17 -----------------
 2 files changed, 43 deletions(-)

--
2.7.4

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ab58288..213d277 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -650,23 +650,6 @@ bool sched_can_stop_tick(struct rq *rq)
 	return true;
 }
 #endif /* CONFIG_NO_HZ_FULL */
-
-void sched_avg_update(struct rq *rq)
-{
-	s64 period = sched_avg_period();
-
-	while ((s64)(rq_clock(rq) - rq->age_stamp) > period) {
-		/*
-		 * Inline assembly required to prevent the compiler
-		 * optimising this loop into a divmod call.
-		 * See __iter_div_u64_rem() for another example of this.
-		 */
-		asm("" : "+rm" (rq->age_stamp));
-		rq->age_stamp += period;
-		rq->rt_avg /= 2;
-	}
-}
-
 #endif /* CONFIG_SMP */
 
 #if defined(CONFIG_RT_GROUP_SCHED) || (defined(CONFIG_FAIR_GROUP_SCHED) && \
@@ -5710,13 +5693,6 @@ void set_rq_offline(struct rq *rq)
 	}
 }
 
-static void set_cpu_rq_start_time(unsigned int cpu)
-{
-	struct rq *rq = cpu_rq(cpu);
-
-	rq->age_stamp = sched_clock_cpu(cpu);
-}
-
 /*
  * used to mark begin/end of suspend/resume:
  */
@@ -5834,7 +5810,6 @@ static void sched_rq_cpu_starting(unsigned int cpu)
 int sched_cpu_starting(unsigned int cpu)
 {
-	set_cpu_rq_start_time(cpu);
 	sched_rq_cpu_starting(cpu);
 	sched_tick_start(cpu);
 	return 0;
@@ -6102,7 +6077,6 @@ void __init sched_init(void)
 
 #ifdef CONFIG_SMP
 	idle_thread_set_boot_cpu();
-	set_cpu_rq_start_time(smp_processor_id());
 #endif
 	init_sched_fair_class();
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 718c55d..1929db7 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -846,8 +846,6 @@ struct rq {
 
 	struct list_head cfs_tasks;
 
-	u64 rt_avg;
-	u64 age_stamp;
 	struct sched_avg avg_rt;
 	struct sched_avg avg_dl;
 	struct sched_avg avg_irq;
@@ -1710,11 +1708,6 @@ extern const_debug unsigned int sysctl_sched_time_avg;
 extern const_debug unsigned int sysctl_sched_nr_migrate;
 extern const_debug unsigned int sysctl_sched_migration_cost;
 
-static inline u64 sched_avg_period(void)
-{
-	return (u64)sysctl_sched_time_avg * NSEC_PER_MSEC / 2;
-}
-
 #ifdef CONFIG_SCHED_HRTICK
 
 /*
@@ -1751,8 +1744,6 @@ unsigned long arch_scale_freq_capacity(int cpu)
 #endif
 
 #ifdef CONFIG_SMP
-extern void sched_avg_update(struct rq *rq);
-
 #ifndef arch_scale_cpu_capacity
 static __always_inline
 unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
@@ -1763,12 +1754,6 @@ unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
 	return SCHED_CAPACITY_SCALE;
 }
 #endif
-
-static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta)
-{
-	rq->rt_avg += rt_delta * arch_scale_freq_capacity(cpu_of(rq));
-	sched_avg_update(rq);
-}
 #else
 #ifndef arch_scale_cpu_capacity
 static __always_inline
@@ -1777,8 +1762,6 @@ unsigned long arch_scale_cpu_capacity(void __always_unused *sd, int cpu)
 	return SCHED_CAPACITY_SCALE;
 }
 #endif
-static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta) { }
-static inline void sched_avg_update(struct rq *rq) { }
 #endif
 
 struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)

From patchwork Fri May 25 13:12:31 2018
5+l55xMI7EzDPdpehVNOt55L+bojumTjoT8/grp4fagvxcq/U9AMfKOh2gsM1/fi1oCc 9W6TCWFlh8VmsPSjmXAdHDFklvd50tDcpqNH8v1VoOjb6d/32LJZz9EtMATsVh5V3fm5 9ZKC/IQPgEqPJLbqG6pk3YrGOaLdbT8oQPnaOIgqogt5DkK9Z5rfuHInOhw/o5TBXI+L jBFw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=OEPWVLMn; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id e8-v6si7069433pgu.511.2018.05.25.06.13.09; Fri, 25 May 2018 06:13:17 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=OEPWVLMn; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S936074AbeEYNNH (ORCPT + 30 others); Fri, 25 May 2018 09:13:07 -0400 Received: from mail-wm0-f67.google.com ([74.125.82.67]:50691 "EHLO mail-wm0-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S935892AbeEYNMv (ORCPT ); Fri, 25 May 2018 09:12:51 -0400 Received: by mail-wm0-f67.google.com with SMTP id t11-v6so14336622wmt.0 for ; Fri, 25 May 2018 06:12:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=6Izc5movT2lxdD5y5CtHVpwDbex1pTsZvvAwBSYatsM=; 
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org, rjw@rjwysocki.net
Cc: juri.lelli@redhat.com, dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com, viresh.kumar@linaro.org, valentin.schneider@arm.com, quentin.perret@arm.com, Vincent Guittot
Subject: [PATCH v5 10/10] proc/sched: remove unused sched_time_avg_ms
Date: Fri, 25 May 2018 15:12:31 +0200
Message-Id: <1527253951-22709-11-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
List-ID: <linux-kernel.vger.kernel.org>
The /proc/sys/kernel/sched_time_avg_ms entry is not used anywhere. Remove it.

Signed-off-by: Vincent Guittot
---
 include/linux/sched/sysctl.h | 1 -
 kernel/sched/core.c          | 8 --------
 kernel/sched/sched.h         | 1 -
 kernel/sysctl.c              | 8 --------
 4 files changed, 18 deletions(-)

-- 
2.7.4

diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 1c1a151..913488d 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -40,7 +40,6 @@ extern unsigned int sysctl_numa_balancing_scan_size;
 #ifdef CONFIG_SCHED_DEBUG
 extern __read_mostly unsigned int sysctl_sched_migration_cost;
 extern __read_mostly unsigned int sysctl_sched_nr_migrate;
-extern __read_mostly unsigned int sysctl_sched_time_avg;

 int sched_proc_update_handler(struct ctl_table *table, int write,
 			      void __user *buffer, size_t *length,

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 213d277..9894bc7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -46,14 +46,6 @@ const_debug unsigned int sysctl_sched_features =
 const_debug unsigned int sysctl_sched_nr_migrate = 32;

 /*
- * period over which we average the RT time consumption, measured
- * in ms.
- *
- * default: 1s
- */
-const_debug unsigned int sysctl_sched_time_avg = MSEC_PER_SEC;
-
-/*
  * period over which we measure -rt task CPU usage in us.
  * default: 1s
  */

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1929db7..5d55782 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1704,7 +1704,6 @@ extern void deactivate_task(struct rq *rq, struct task_struct *p, int flags);

 extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);

-extern const_debug unsigned int sysctl_sched_time_avg;
 extern const_debug unsigned int sysctl_sched_nr_migrate;
 extern const_debug unsigned int sysctl_sched_migration_cost;

diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 6a78cf7..d77a959 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -368,14 +368,6 @@ static struct ctl_table kern_table[] = {
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec,
 	},
-	{
-		.procname	= "sched_time_avg_ms",
-		.data		= &sysctl_sched_time_avg,
-		.maxlen		= sizeof(unsigned int),
-		.mode		= 0644,
-		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
-	},
 #ifdef CONFIG_SCHEDSTATS
 	{
 		.procname	= "sched_schedstats",
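[Note: not part of the patch.] The commit message's claim that sched_time_avg_ms "is not used anywhere" is the kind of thing one would normally verify with `git grep sysctl_sched_time_avg` over the tree before removing the knob. A minimal, hypothetical sketch of such a whole-word scan over .c/.h files (the helper name and behavior are illustrative, not any kernel tooling):

```python
import re
from pathlib import Path

def find_symbol_uses(root, symbol):
    """Scan .c/.h files under `root` and return (path, line_no, text)
    tuples for every whole-word occurrence of `symbol`."""
    pattern = re.compile(r"\b%s\b" % re.escape(symbol))
    hits = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in (".c", ".h"):
            lines = path.read_text(errors="ignore").splitlines()
            for line_no, text in enumerate(lines, start=1):
                if pattern.search(text):
                    hits.append((str(path), line_no, text.strip()))
    return hits
```

Run against a kernel tree with this series applied, `find_symbol_uses(tree, "sysctl_sched_time_avg")` should come back empty; the `\b` boundaries keep the related-but-distinct procname string from matching a shorter symbol by prefix.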