From patchwork Fri May 25 13:12:25 2018
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 136869
From: Vincent Guittot <vincent.guittot@linaro.org>
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
 rjw@rjwysocki.net
Cc: juri.lelli@redhat.com, dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com,
 viresh.kumar@linaro.org, valentin.schneider@arm.com, quentin.perret@arm.com,
 Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH v5 04/10] sched/dl: add dl_rq utilization tracking
Date: Fri, 25 May 2018 15:12:25 +0200
Message-Id: <1527253951-22709-5-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Similarly to what happens with rt tasks, cfs tasks can be preempted by dl
tasks, and the cfs utilization might then no longer describe the real
utilization level.
The current dl bandwidth reflects the requirements needed to meet the
deadlines when tasks are enqueued, but not the current utilization of the
dl sched class. We track the dl class utilization to estimate the system
utilization.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/deadline.c |  5 +++++
 kernel/sched/fair.c     | 11 ++++++++---
 kernel/sched/pelt.c     | 23 +++++++++++++++++++++++
 kernel/sched/pelt.h     |  6 ++++++
 kernel/sched/sched.h    |  1 +
 5 files changed, 43 insertions(+), 3 deletions(-)

-- 
2.7.4

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 1356afd..950b3fb 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -16,6 +16,7 @@
  * Fabio Checconi
  */
 #include "sched.h"
+#include "pelt.h"
 
 struct dl_bandwidth def_dl_bandwidth;
 
@@ -1761,6 +1762,8 @@ pick_next_task_dl(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 
 	deadline_queue_push_tasks(rq);
 
+	update_dl_rq_load_avg(rq_clock_task(rq), rq,
+			      rq->curr->sched_class == &dl_sched_class);
 	return p;
 }
 
@@ -1768,6 +1771,7 @@ static void put_prev_task_dl(struct rq *rq, struct task_struct *p)
 {
 	update_curr_dl(rq);
 
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, 1);
 	if (on_dl_rq(&p->dl) && p->nr_cpus_allowed > 1)
 		enqueue_pushable_dl_task(rq, p);
 }
@@ -1784,6 +1788,7 @@ static void task_tick_dl(struct rq *rq, struct task_struct *p, int queued)
 {
 	update_curr_dl(rq);
 
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, 1);
 	/*
 	 * Even when we have runtime, update_curr_dl() might have resulted in us
 	 * not being the leftmost task anymore. In that case NEED_RESCHED will
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fb18bcc..967e873 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7290,11 +7290,14 @@ static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq)
 	return false;
 }
 
-static inline bool rt_rq_has_blocked(struct rq *rq)
+static inline bool others_rqs_have_blocked(struct rq *rq)
 {
 	if (rq->avg_rt.util_avg)
 		return true;
 
+	if (rq->avg_dl.util_avg)
+		return true;
+
 	return false;
 }
 
@@ -7358,8 +7361,9 @@ static void update_blocked_averages(int cpu)
 			done = false;
 	}
 	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
 	/* Don't need periodic decay once load/util_avg are null */
-	if (rt_rq_has_blocked(rq))
+	if (others_rqs_have_blocked(rq))
 		done = false;
 
 #ifdef CONFIG_NO_HZ_COMMON
@@ -7427,9 +7431,10 @@ static inline void update_blocked_averages(int cpu)
 	update_rq_clock(rq);
 	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
 	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
 #ifdef CONFIG_NO_HZ_COMMON
 	rq->last_blocked_load_update_tick = jiffies;
-	if (!cfs_rq_has_blocked(cfs_rq) && !rt_rq_has_blocked(rq))
+	if (!cfs_rq_has_blocked(cfs_rq) && !others_rqs_have_blocked(rq))
 		rq->has_blocked_load = 0;
 #endif
 	rq_unlock_irqrestore(rq, &rf);
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 213b922..b07db80 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -332,3 +332,26 @@ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
 
 	return 0;
 }
+
+/*
+ * dl_rq:
+ *
+ *   util_sum = \Sum se->avg.util_sum but se->avg.util_sum is not tracked
+ *   util_sum = cpu_scale * load_sum
+ *   runnable_load_sum = load_sum
+ *
+ */
+
+int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+{
+	if (___update_load_sum(now, rq->cpu, &rq->avg_dl,
+				running,
+				running,
+				running)) {
+
+		___update_load_avg(&rq->avg_dl, 1, 1);
+		return 1;
+	}
+
+	return 0;
+}
diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
index b2983b7..0e4f912 100644
--- a/kernel/sched/pelt.h
+++ b/kernel/sched/pelt.h
@@ -4,6 +4,7 @@ int __update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se);
 int __update_load_avg_se(u64 now, int cpu, struct cfs_rq *cfs_rq, struct sched_entity *se);
 int __update_load_avg_cfs_rq(u64 now, int cpu, struct cfs_rq *cfs_rq);
 int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
 
 /*
  * When a task is dequeued, its estimated utilization should not be update if
@@ -45,6 +46,11 @@ update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
 	return 0;
 }
 
+static inline int
+update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+{
+	return 0;
+}
 #endif
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 7a16de9..4526ba6 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -849,6 +849,7 @@ struct rq {
 	u64			rt_avg;
 	u64			age_stamp;
 	struct sched_avg	avg_rt;
+	struct sched_avg	avg_dl;
 	u64			idle_stamp;
 	u64			avg_idle;
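
Editor's note (illustration, not part of the patch): once rq->avg_dl is
tracked alongside rq->avg_rt and the cfs signal, a consumer can estimate the
total CPU utilization by summing the per-class util_avg values and capping
the result at the CPU capacity. The standalone sketch below only illustrates
that idea; the helper name cpu_util_total() and the sample numbers are
hypothetical and are not taken from this series.

#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024UL

/*
 * Hypothetical helper: combine cfs, rt and dl utilization signals (each in
 * the [0..SCHED_CAPACITY_SCALE] range, as rq->cfs.avg.util_avg,
 * rq->avg_rt.util_avg and the new rq->avg_dl.util_avg would be) and clamp
 * the sum to the CPU capacity.
 */
static unsigned long cpu_util_total(unsigned long util_cfs,
				    unsigned long util_rt,
				    unsigned long util_dl)
{
	unsigned long util = util_cfs + util_rt + util_dl;

	return util < SCHED_CAPACITY_SCALE ? util : SCHED_CAPACITY_SCALE;
}

int main(void)
{
	/* Example: cfs load at ~40%, no rt activity, a dl task at ~25%. */
	printf("total util = %lu / %lu\n",
	       cpu_util_total(410, 0, 256), SCHED_CAPACITY_SCALE);
	return 0;
}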