From patchwork Fri Apr 28 14:23:55 2017
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 98365
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org
Cc: dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com, yuyang.du@intel.com,
    pjt@google.com, bsegall@google.com, Vincent Guittot
Subject: [PATCH v3] sched/fair: update scale invariance of PELT
Date: Fri, 28 Apr 2017 16:23:55 +0200
Message-Id: <1493389435-2525-1-git-send-email-vincent.guittot@linaro.org>

The current implementation of load tracking invariance scales the
contribution with the current frequency and uarch performance (only for
utilization) of the CPU. One main result of this formula is that the
figures are capped by the current capacity of the CPU. Another is that
load_avg is not invariant because it is not scaled with uarch.
The util_avg of a periodic task that runs r time slots every p time slots
varies in the range:

    U * (1-y^r)/(1-y^p) * y^i < Utilization < U * (1-y^r)/(1-y^p)

where U is the max util_avg value, i.e. SCHED_CAPACITY_SCALE.

At a lower capacity, the range becomes:

    U * C * (1-y^r')/(1-y^p) * y^i' < Utilization < U * C * (1-y^r')/(1-y^p)

with C reflecting the compute capacity ratio between the current capacity
and the max capacity. So C tries to compensate for changes in (1-y^r'),
but it can't be accurate.

Instead of scaling the contribution value of the PELT algorithm, we should
scale the running time. The PELT signal aims to track the amount of
computation done by tasks and/or rqs, so it seems more correct to scale
the running time to reflect the effective amount of computation done since
the last update.

In order to be fully invariant, we need to apply the same amounts of
running time and idle time whatever the current capacity. Because running
at lower capacity implies that the task will run longer, we have to track
the amount of "stolen" idle time and apply it when the task becomes idle.
But once we have reached the maximum utilization value
(SCHED_CAPACITY_SCALE), the task is seen as an always-running task
whatever the capacity of the CPU (even at max compute capacity). In this
case, we can discard the "stolen" idle time, which becomes meaningless.
In order to cope with the rounding effects of the PELT algorithm, we take
a margin and consider a task with utilization greater than 1000 (vs 1024
max) as an always-running task.

Then, we can use the same algorithm for both utilization and load, and
simplify __update_load_avg() now that the load of a task doesn't have to
be capped by the CPU uarch.

The responsiveness of PELT is improved when the CPU is not running at max
capacity with this new algorithm. I have put below some examples of the
duration needed to reach some typical load values according to the
capacity of the CPU, with the current implementation and with this patch.
Util (%)     max capacity   half capacity (mainline)   half capacity (w/ patch)
972 (95%)    138ms          not reachable              276ms
486 (47.5%)  30ms           138ms                      60ms
256 (25%)    13ms           32ms                       26ms

On my hikey (octo ARM platform) with the schedutil governor, the time to
reach the max OPP when starting from a null utilization decreases from
223ms with the current scale invariance down to 121ms with the new
algorithm. For this test, I have enabled arch_scale_freq for arm64.

Signed-off-by: Vincent Guittot
---
Changes since v3:
- Add comments
- With the patch ("sched/cfs: make util/load_avg more stable"), utilization
  stays stable when reaching the max value. Removed the margin used to
  detect an always-running task.

 include/linux/sched.h |  1 +
 kernel/sched/fair.c   | 76 ++++++++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 70 insertions(+), 7 deletions(-)

-- 
2.7.4

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 0978fb7..f8dde36 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -313,6 +313,7 @@ struct load_weight {
  */
 struct sched_avg {
 	u64				last_update_time;
+	u64				stolen_idle_time;
 	u64				load_sum;
 	u32				util_sum;
 	u32				period_contrib;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a903276..8b036f1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -729,6 +729,7 @@ void init_entity_runnable_average(struct sched_entity *se)
 	struct sched_avg *sa = &se->avg;
 
 	sa->last_update_time = 0;
+	sa->stolen_idle_time = 0;
 	/*
 	 * sched_avg's period_contrib should be strictly less then 1024, so
 	 * we give it 1023 to make sure it is almost a period (1024us), and
@@ -2804,15 +2805,12 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
  *               n=1
  */
 static __always_inline u32
-accumulate_sum(u64 delta, int cpu, struct sched_avg *sa,
+accumulate_sum(u64 delta, struct sched_avg *sa,
 	       unsigned long weight, int running, struct cfs_rq *cfs_rq)
 {
-	unsigned long scale_freq, scale_cpu;
 	u32 contrib = (u32)delta; /* p == 0 -> delta < 1024 */
 	u64 periods;
 
-	scale_freq = arch_scale_freq_capacity(NULL, cpu);
-	scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
-
 	delta += sa->period_contrib;
 	periods = delta / 1024; /* A period is 1024us (~1ms) */
@@ -2837,19 +2835,77 @@ accumulate_sum(u64 delta, int cpu, struct sched_avg *sa,
 	}
 	sa->period_contrib = delta;
 
-	contrib = cap_scale(contrib, scale_freq);
 	if (weight) {
 		sa->load_sum += weight * contrib;
 		if (cfs_rq)
 			cfs_rq->runnable_load_sum += weight * contrib;
 	}
 	if (running)
-		sa->util_sum += contrib * scale_cpu;
+		sa->util_sum += contrib << SCHED_CAPACITY_SHIFT;
 
 	return periods;
 }
 
 /*
+ * Scale the time to reflect the effective amount of computation done during
+ * this delta time.
+ */
+static __always_inline u64
+scale_time(u64 delta, int cpu, struct sched_avg *sa,
+	   unsigned long weight, int running)
+{
+	if (running) {
+		/*
+		 * When an entity runs at a lower compute capacity, it will
+		 * need more time to do the same amount of work than at max
+		 * capacity. In order to be invariant, we scale the delta to
+		 * reflect how much work has really been done.
+		 * Running at lower capacity also means running longer to do
+		 * the same amount of work and this results in stealing some
+		 * idle time that will disturb the load signal compared to
+		 * max capacity; we also track this amount of stolen time to
+		 * reflect it when the entity goes back to sleep.
+		 *
+		 * stolen time = (current run time) - (effective time at max
+		 * capacity)
+		 */
+		sa->stolen_idle_time += delta;
+
+		/*
+		 * Scale the elapsed time to reflect the real amount of
+		 * computation.
+		 */
+		delta = cap_scale(delta, arch_scale_freq_capacity(NULL, cpu));
+		delta = cap_scale(delta, arch_scale_cpu_capacity(NULL, cpu));
+
+		/*
+		 * Track the amount of stolen idle time due to running at
+		 * lower capacity.
+		 */
+		sa->stolen_idle_time -= delta;
+	} else if (!weight) {
+		/*
+		 * The entity is sleeping so both utilization and load will
+		 * decay, and we can safely add the stolen time. Reflecting
+		 * some stolen time makes sense only if this idle phase would
+		 * also be present at max capacity. As soon as the utilization
+		 * of an entity has reached the maximum value, it is
+		 * considered an always-running entity without idle time to
+		 * steal.
+		 */
+		if (sa->util_avg < (SCHED_CAPACITY_SCALE - 1)) {
+			/*
+			 * Add the idle time stolen by running at lower
+			 * compute capacity.
+			 */
+			delta += sa->stolen_idle_time;
+		}
+		sa->stolen_idle_time = 0;
+	}
+
+	return delta;
+}
+
+/*
  * We can represent the historical contribution to runnable average as the
  * coefficients of a geometric series. To do this we sub-divide our runnable
  * history into segments of approximately 1ms (1024us); label the segment that
@@ -2904,13 +2960,19 @@ ___update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 	sa->last_update_time += delta << 10;
 
 	/*
+	 * Scale time to reflect the amount of computation effectively done
+	 * during the time slot at the current capacity.
+	 */
+	delta = scale_time(delta, cpu, sa, weight, running);
+
+	/*
 	 * Now we know we crossed measurement unit boundaries. The *_avg
 	 * accrues by two steps:
 	 *
 	 * Step 1: accumulate *_sum since last_update_time. If we haven't
 	 * crossed period boundaries, finish.
 	 */
-	if (!accumulate_sum(delta, cpu, sa, weight, running, cfs_rq))
+	if (!accumulate_sum(delta, sa, weight, running, cfs_rq))
 		return 0;
 
 	/*