From patchwork Mon Sep 12 07:47:48 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vincent Guittot <vincent.guittot@linaro.org>
X-Patchwork-Id: 75967
From: Vincent Guittot <vincent.guittot@linaro.org>
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org, yuyang.du@intel.com, Morten.Rasmussen@arm.com
Cc: linaro-kernel@lists.linaro.org, dietmar.eggemann@arm.com, pjt@google.com, bsegall@google.com, Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH 3/7 v3] sched: factorize PELT update
Date: Mon, 12 Sep 2016 09:47:48 +0200
Message-Id: <1473666472-13749-4-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1473666472-13749-1-git-send-email-vincent.guittot@linaro.org>
References: <1473666472-13749-1-git-send-email-vincent.guittot@linaro.org>

Every time we modify the load/utilization of a sched_entity, we start by
syncing it with its cfs_rq. This update is currently done in different
ways:
-when attaching/detaching a sched_entity, we update the cfs_rq and then
 we sync the entity with the cfs_rq.
-when enqueueing/dequeuing a sched_entity, we update both the
 sched_entity and the cfs_rq metrics to now.

Use update_load_avg() every time we have to update and sync the cfs_rq
and the sched_entity before changing the state of a sched_entity.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 68 ++++++++++++++---------------------------------------
 1 file changed, 17 insertions(+), 51 deletions(-)

--
1.9.1
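[Illustration only, not part of the patch: the example_* wrappers below are
made up for readability; the helpers they call are the ones touched by the
hunks that follow. They show the call pattern this patch converges on:
first sync the entity and its cfs_rq through update_load_avg(), then move
the entity's contribution, then update the task group.]

/* Sketch of the detach path after this patch (see detach_task_cfs_rq()) */
static void example_detach_path(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
        update_load_avg(se, 0, 0);              /* catch se and cfs_rq up to now */
        detach_entity_load_avg(cfs_rq, se);     /* remove se's contribution */
        update_tg_load_avg(cfs_rq, false);
}

/* Sketch of the attach path after this patch (see attach_entity_cfs_rq()) */
static void example_attach_path(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
        /* aging of se is skipped when ATTACH_AGE_LOAD is disabled */
        update_load_avg(se, 0, !sched_feat(ATTACH_AGE_LOAD));
        attach_entity_load_avg(cfs_rq, se);
        update_tg_load_avg(cfs_rq, false);
}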
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 264119a..0aa1d7d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3086,7 +3086,8 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
 }
 
 /* Update task and its cfs_rq load average */
-static inline void update_load_avg(struct sched_entity *se, int update_tg)
+static inline void update_load_avg(struct sched_entity *se, int update_tg,
+                                   int skip_aging)
 {
         struct cfs_rq *cfs_rq = cfs_rq_of(se);
         u64 now = cfs_rq_clock_task(cfs_rq);
@@ -3097,7 +3098,8 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
          * Track task load average for carrying it to new CPU after migrated, and
          * track group sched_entity load average for task_h_load calc in migration
          */
-        __update_load_avg(now, cpu, &se->avg,
+        if (se->avg.last_update_time && !skip_aging)
+                __update_load_avg(now, cpu, &se->avg,
                 se->on_rq * scale_load_down(se->load.weight),
                 cfs_rq->curr == se, NULL);
 
@@ -3115,26 +3117,6 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
  */
 static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-        if (!sched_feat(ATTACH_AGE_LOAD))
-                goto skip_aging;
-
-        /*
-         * If we got migrated (either between CPUs or between cgroups) we'll
-         * have aged the average right before clearing @last_update_time.
-         *
-         * Or we're fresh through post_init_entity_util_avg().
-         */
-        if (se->avg.last_update_time) {
-                __update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
-                                  &se->avg, 0, 0, NULL);
-
-                /*
-                 * XXX: we could have just aged the entire load away if we've been
-                 * absent from the fair class for too long.
-                 */
-        }
-
-skip_aging:
         se->avg.last_update_time = cfs_rq->avg.last_update_time;
         cfs_rq->avg.load_avg += se->avg.load_avg;
         cfs_rq->avg.load_sum += se->avg.load_sum;
@@ -3154,9 +3136,6 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
  */
 static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-        __update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
-                          &se->avg, se->on_rq * scale_load_down(se->load.weight),
-                          cfs_rq->curr == se, NULL);
 
         sub_positive(&cfs_rq->avg.load_avg, se->avg.load_avg);
         sub_positive(&cfs_rq->avg.load_sum, se->avg.load_sum);
@@ -3171,34 +3150,20 @@ static inline void
 enqueue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
         struct sched_avg *sa = &se->avg;
-        u64 now = cfs_rq_clock_task(cfs_rq);
-        int migrated, decayed;
-
-        migrated = !sa->last_update_time;
-        if (!migrated) {
-                __update_load_avg(now, cpu_of(rq_of(cfs_rq)), sa,
-                        se->on_rq * scale_load_down(se->load.weight),
-                        cfs_rq->curr == se, NULL);
-        }
-
-        decayed = update_cfs_rq_load_avg(now, cfs_rq, !migrated);
 
         cfs_rq->runnable_load_avg += sa->load_avg;
         cfs_rq->runnable_load_sum += sa->load_sum;
 
-        if (migrated)
+        if (!sa->last_update_time) {
                 attach_entity_load_avg(cfs_rq, se);
-
-        if (decayed || migrated)
                 update_tg_load_avg(cfs_rq, 0);
+        }
 }
 
 /* Remove the runnable load generated by se from cfs_rq's runnable load average */
 static inline void
 dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-        update_load_avg(se, 1);
-
         cfs_rq->runnable_load_avg =
                 max_t(long, cfs_rq->runnable_load_avg - se->avg.load_avg, 0);
         cfs_rq->runnable_load_sum =
@@ -3272,7 +3237,8 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
         return 0;
 }
 
-static inline void update_load_avg(struct sched_entity *se, int not_used)
+static inline void update_load_avg(struct sched_entity *se,
+                                   int not_used1, int not_used2)
 {
         struct cfs_rq *cfs_rq = cfs_rq_of(se);
         struct rq *rq = rq_of(cfs_rq);
@@ -3420,6 +3386,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
         if (renorm && !curr)
                 se->vruntime += cfs_rq->min_vruntime;
 
+        update_load_avg(se, 1, 0);
         enqueue_entity_load_avg(cfs_rq, se);
         account_entity_enqueue(cfs_rq, se);
         update_cfs_shares(cfs_rq);
@@ -3494,6 +3461,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
          * Update run-time statistics of the 'current'.
          */
         update_curr(cfs_rq);
+        update_load_avg(se, 1, 0);
         dequeue_entity_load_avg(cfs_rq, se);
 
         update_stats_dequeue(cfs_rq, se, flags);
@@ -3572,7 +3540,7 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
                  */
                 update_stats_wait_end(cfs_rq, se);
                 __dequeue_entity(cfs_rq, se);
-                update_load_avg(se, 1);
+                update_load_avg(se, 1, 0);
         }
 
         update_stats_curr_start(cfs_rq, se);
@@ -3674,7 +3642,7 @@ static void put_prev_entity(struct cfs_rq *cfs_rq, struct sched_entity *prev)
                 /* Put 'current' back into the tree. */
                 __enqueue_entity(cfs_rq, prev);
                 /* in !on_rq case, update occurred at dequeue */
-                update_load_avg(prev, 0);
+                update_load_avg(prev, 0, 0);
         }
         cfs_rq->curr = NULL;
 }
@@ -3690,7 +3658,7 @@ entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
         /*
          * Ensure that runnable average is periodically updated.
          */
-        update_load_avg(curr, 1);
+        update_load_avg(curr, 1, 0);
         update_cfs_shares(cfs_rq);
 
 #ifdef CONFIG_SCHED_HRTICK
@@ -4579,7 +4547,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
                 if (cfs_rq_throttled(cfs_rq))
                         break;
 
-                update_load_avg(se, 1);
+                update_load_avg(se, 1, 0);
                 update_cfs_shares(cfs_rq);
         }
 
@@ -4638,7 +4606,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
                 if (cfs_rq_throttled(cfs_rq))
                         break;
 
-                update_load_avg(se, 1);
+                update_load_avg(se, 1, 0);
                 update_cfs_shares(cfs_rq);
         }
 
@@ -8517,7 +8485,6 @@ static void detach_task_cfs_rq(struct task_struct *p)
 {
         struct sched_entity *se = &p->se;
         struct cfs_rq *cfs_rq = cfs_rq_of(se);
-        u64 now = cfs_rq_clock_task(cfs_rq);
 
         if (!vruntime_normalized(p)) {
                 /*
@@ -8529,7 +8496,7 @@ static void detach_task_cfs_rq(struct task_struct *p)
         }
 
         /* Catch up with the cfs_rq and remove our load when we leave */
-        update_cfs_rq_load_avg(now, cfs_rq, false);
+        update_load_avg(se, 0, 0);
         detach_entity_load_avg(cfs_rq, se);
         update_tg_load_avg(cfs_rq, false);
 }
@@ -8537,7 +8504,6 @@ static void detach_task_cfs_rq(struct task_struct *p)
 static void attach_entity_cfs_rq(struct sched_entity *se)
 {
         struct cfs_rq *cfs_rq = cfs_rq_of(se);
-        u64 now = cfs_rq_clock_task(cfs_rq);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
         /*
@@ -8548,7 +8514,7 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
 #endif
 
         /* Synchronize task with its cfs_rq */
-        update_cfs_rq_load_avg(now, cfs_rq, false);
+        update_load_avg(se, 0, !sched_feat(ATTACH_AGE_LOAD));
         attach_entity_load_avg(cfs_rq, se);
         update_tg_load_avg(cfs_rq, false);
 }