From patchwork Tue Oct 9 16:24:56 2018
X-Patchwork-Submitter: Thara Gopinath
X-Patchwork-Id: 148510
From: Thara Gopinath
To: linux-kernel@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, rui.zhang@intel.com
Cc: gregkh@linuxfoundation.org, rafael@kernel.org, amit.kachhap@gmail.com, viresh.kumar@linaro.org, javi.merino@kernel.org, edubezval@gmail.com, daniel.lezcano@linaro.org, linux-pm@vger.kernel.org, quentin.perret@arm.com, ionela.voinescu@arm.com, vincent.guittot@linaro.org
Subject: [RFC PATCH 1/7] sched/pelt.c: Add option to make load and util calculations frequency invariant
Date: Tue, 9 Oct 2018 12:24:56 -0400
Message-Id: <1539102302-9057-2-git-send-email-thara.gopinath@linaro.org>
In-Reply-To: <1539102302-9057-1-git-send-email-thara.gopinath@linaro.org>
References: <1539102302-9057-1-git-send-email-thara.gopinath@linaro.org>

Add an additional parameter to accumulate_sum() to allow optional
frequency adjustment of load and utilization. When considering rt/dl
load/util, it is correct to scale it to the current cpu frequency. On
the other hand, thermal pressure (max capped frequency) is frequency
invariant.
Signed-off-by: Thara Gopinath
---
 kernel/sched/pelt.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

-- 
2.1.4

diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 35475c0..05b8798 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -107,7 +107,8 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
  */
 static __always_inline u32
 accumulate_sum(u64 delta, int cpu, struct sched_avg *sa,
-	       unsigned long load, unsigned long runnable, int running)
+	       unsigned long load, unsigned long runnable, int running,
+	       int freq_adjusted)
 {
 	unsigned long scale_freq, scale_cpu;
 	u32 contrib = (u32)delta; /* p == 0 -> delta < 1024 */
@@ -137,7 +138,8 @@ accumulate_sum(u64 delta, int cpu, struct sched_avg *sa,
 	}
 	sa->period_contrib = delta;
 
-	contrib = cap_scale(contrib, scale_freq);
+	if (freq_adjusted)
+		contrib = cap_scale(contrib, scale_freq);
 	if (load)
 		sa->load_sum += load * contrib;
 	if (runnable)
@@ -178,7 +180,8 @@ accumulate_sum(u64 delta, int cpu, struct sched_avg *sa,
  */
 static __always_inline int
 ___update_load_sum(u64 now, int cpu, struct sched_avg *sa,
-		  unsigned long load, unsigned long runnable, int running)
+		  unsigned long load, unsigned long runnable, int running,
+		  int freq_adjusted)
 {
 	u64 delta;
 
@@ -221,7 +224,8 @@ ___update_load_sum(u64 now, int cpu, struct sched_avg *sa,
 	 * Step 1: accumulate *_sum since last_update_time. If we haven't
 	 * crossed period boundaries, finish.
 	 */
-	if (!accumulate_sum(delta, cpu, sa, load, runnable, running))
+	if (!accumulate_sum(delta, cpu, sa, load, runnable, running,
+			    freq_adjusted))
 		return 0;
 
 	return 1;
@@ -272,7 +276,7 @@ int __update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se)
 	if (entity_is_task(se))
 		se->runnable_weight = se->load.weight;
 
-	if (___update_load_sum(now, cpu, &se->avg, 0, 0, 0)) {
+	if (___update_load_sum(now, cpu, &se->avg, 0, 0, 0, 1)) {
 		___update_load_avg(&se->avg, se_weight(se), se_runnable(se));
 		return 1;
 	}
@@ -286,7 +290,7 @@ int __update_load_avg_se(u64 now, int cpu, struct cfs_rq *cfs_rq, struct sched_e
 		se->runnable_weight = se->load.weight;
 
 	if (___update_load_sum(now, cpu, &se->avg, !!se->on_rq, !!se->on_rq,
-				cfs_rq->curr == se)) {
+				cfs_rq->curr == se, 1)) {
 
 		___update_load_avg(&se->avg, se_weight(se), se_runnable(se));
 		cfs_se_util_change(&se->avg);
@@ -301,7 +305,7 @@ int __update_load_avg_cfs_rq(u64 now, int cpu, struct cfs_rq *cfs_rq)
 	if (___update_load_sum(now, cpu, &cfs_rq->avg,
 				scale_load_down(cfs_rq->load.weight),
 				scale_load_down(cfs_rq->runnable_weight),
-				cfs_rq->curr != NULL)) {
+				cfs_rq->curr != NULL, 1)) {
 
 		___update_load_avg(&cfs_rq->avg, 1, 1);
 		return 1;
@@ -326,7 +330,7 @@ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
 	if (___update_load_sum(now, rq->cpu, &rq->avg_rt,
 				running,
 				running,
-				running)) {
+				running, 1)) {
 
 		___update_load_avg(&rq->avg_rt, 1, 1);
 		return 1;
@@ -349,7 +353,7 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
 	if (___update_load_sum(now, rq->cpu, &rq->avg_dl,
 				running,
 				running,
-				running)) {
+				running, 1)) {
 
 		___update_load_avg(&rq->avg_dl, 1, 1);
 		return 1;
@@ -385,11 +389,11 @@ int update_irq_load_avg(struct rq *rq, u64 running)
 	ret = ___update_load_sum(rq->clock - running, rq->cpu, &rq->avg_irq,
 				0,
 				0,
-				0);
+				0, 1);
 	ret += ___update_load_sum(rq->clock, rq->cpu, &rq->avg_irq,
 				1,
 				1,
-				1);
+				1, 1);
 
 	if (ret)
 		___update_load_avg(&rq->avg_irq, 1, 1);