From patchwork Wed Dec 9 06:19:30 2015
X-Patchwork-Submitter: Steve Muckle
X-Patchwork-Id: 57926
From: Steve Muckle
To: Peter Zijlstra, Ingo Molnar
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Vincent Guittot, Morten Rasmussen, Dietmar Eggemann, Juri Lelli, Patrick Bellasi, Michael Turquette
Subject: [RFCv6 PATCH 09/10] sched: deadline: use deadline bandwidth in scale_rt_capacity
Date: Tue, 8 Dec 2015 22:19:30 -0800
Message-Id: <1449641971-20827-10-git-send-email-smuckle@linaro.org>
In-Reply-To: <1449641971-20827-1-git-send-email-smuckle@linaro.org>
References: <1449641971-20827-1-git-send-email-smuckle@linaro.org>

From: Vincent Guittot

Instead of monitoring the exec time of deadline tasks to evaluate the CPU
capacity consumed by the deadline scheduler class, we can calculate it
directly as the sum of the utilizations of the deadline tasks on the CPU.
We can then remove deadline tasks from the rt_avg metric and use the
average bandwidth of the deadline scheduler directly in scale_rt_capacity.

Based in part on a similar patch from Luca Abeni.
Signed-off-by: Vincent Guittot
Signed-off-by: Steve Muckle
---
 kernel/sched/deadline.c | 33 +++++++++++++++++++++++++++++++--
 kernel/sched/fair.c     |  8 ++++++++
 kernel/sched/sched.h    |  2 ++
 3 files changed, 41 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 8b0a15e..9d9eb50 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -43,6 +43,24 @@ static inline int on_dl_rq(struct sched_dl_entity *dl_se)
 	return !RB_EMPTY_NODE(&dl_se->rb_node);
 }
 
+static void add_average_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
+{
+	u64 se_bw = dl_se->dl_bw;
+
+	dl_rq->avg_bw += se_bw;
+}
+
+static void clear_average_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
+{
+	u64 se_bw = dl_se->dl_bw;
+
+	dl_rq->avg_bw -= se_bw;
+	if (dl_rq->avg_bw < 0) {
+		WARN_ON(1);
+		dl_rq->avg_bw = 0;
+	}
+}
+
 static inline int is_leftmost(struct task_struct *p, struct dl_rq *dl_rq)
 {
 	struct sched_dl_entity *dl_se = &p->dl;
@@ -494,6 +512,9 @@ static void update_dl_entity(struct sched_dl_entity *dl_se,
 	struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
 	struct rq *rq = rq_of_dl_rq(dl_rq);
 
+	if (dl_se->dl_new)
+		add_average_bw(dl_se, dl_rq);
+
 	/*
 	 * The arrival of a new instance needs special treatment, i.e.,
 	 * the actual scheduling parameters have to be "renewed".
@@ -741,8 +762,6 @@ static void update_curr_dl(struct rq *rq)
 	curr->se.exec_start = rq_clock_task(rq);
 	cpuacct_charge(curr, delta_exec);
 
-	sched_rt_avg_update(rq, delta_exec);
-
 	dl_se->runtime -= dl_se->dl_yielded ? 0 : delta_exec;
 	if (dl_runtime_exceeded(dl_se)) {
 		dl_se->dl_throttled = 1;
@@ -1241,6 +1260,8 @@ static void task_fork_dl(struct task_struct *p)
 static void task_dead_dl(struct task_struct *p)
 {
 	struct dl_bw *dl_b = dl_bw_of(task_cpu(p));
+	struct dl_rq *dl_rq = dl_rq_of_se(&p->dl);
+	struct rq *rq = rq_of_dl_rq(dl_rq);
 
 	/*
 	 * Since we are TASK_DEAD we won't slip out of the domain!
@@ -1249,6 +1270,8 @@ static void task_dead_dl(struct task_struct *p)
 	/* XXX we should retain the bw until 0-lag */
 	dl_b->total_bw -= p->dl.dl_bw;
 	raw_spin_unlock_irq(&dl_b->lock);
+
+	clear_average_bw(&p->dl, &rq->dl);
 }
 
 static void set_curr_task_dl(struct rq *rq)
@@ -1556,7 +1579,9 @@ retry:
 	}
 
 	deactivate_task(rq, next_task, 0);
+	clear_average_bw(&next_task->dl, &rq->dl);
 	set_task_cpu(next_task, later_rq->cpu);
+	add_average_bw(&next_task->dl, &later_rq->dl);
 	activate_task(later_rq, next_task, 0);
 	ret = 1;
 
@@ -1644,7 +1669,9 @@ static void pull_dl_task(struct rq *this_rq)
 			resched = true;
 
 			deactivate_task(src_rq, p, 0);
+			clear_average_bw(&p->dl, &src_rq->dl);
 			set_task_cpu(p, this_cpu);
+			add_average_bw(&p->dl, &this_rq->dl);
 			activate_task(this_rq, p, 0);
 			dmin = p->dl.deadline;
 
@@ -1750,6 +1777,8 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
 	if (!start_dl_timer(p))
 		__dl_clear_params(p);
 
+	clear_average_bw(&p->dl, &rq->dl);
+
 	/*
 	 * Since this might be the only -deadline task on the rq,
 	 * this is the right place to try to pull some other one
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4c49f76..ce05f61 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6203,6 +6203,14 @@ static unsigned long scale_rt_capacity(int cpu)
 
 	used = div_u64(avg, total);
 
+	/*
+	 * deadline bandwidth is defined at system level so we must
+	 * weight this bandwidth with the max capacity of the system.
+	 * As a reminder, avg_bw is 20bits width and
+	 * scale_cpu_capacity is 10 bits width
+	 */
+	used += div_u64(rq->dl.avg_bw, arch_scale_cpu_capacity(NULL, cpu));
+
 	if (likely(used < SCHED_CAPACITY_SCALE))
 		return SCHED_CAPACITY_SCALE - used;
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 08858d1..e44c6be 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -519,6 +519,8 @@ struct dl_rq {
 #else
 	struct dl_bw dl_bw;
 #endif
+	/* This is the "average utilization" for this runqueue */
+	s64 avg_bw;
 };
 
 #ifdef CONFIG_SMP

--
2.4.10