From patchwork Mon Oct 17 09:14:10 2016
X-Patchwork-Submitter: Vincent Guittot <vincent.guittot@linaro.org>
X-Patchwork-Id: 77721
From: Vincent Guittot <vincent.guittot@linaro.org>
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
	dietmar.eggemann@arm.com
Cc: yuyang.du@intel.com, Morten.Rasmussen@arm.com,
	linaro-kernel@lists.linaro.org, pjt@google.com, bsegall@google.com,
	kernellwp@gmail.com, Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH 3/6 v5] sched: factorize PELT update
Date: Mon, 17 Oct 2016 11:14:10 +0200
Message-Id: <1476695653-12309-4-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1476695653-12309-1-git-send-email-vincent.guittot@linaro.org>
References:
 <1476695653-12309-1-git-send-email-vincent.guittot@linaro.org>

Every time we modify the load/utilization of a sched_entity, we start by
synchronizing it with its cfs_rq. This update is currently done in
different ways:
- when attaching/detaching a sched_entity, we update the cfs_rq and then
  we sync the entity with the cfs_rq.
- when enqueueing/dequeuing the sched_entity, we update both the
  sched_entity and the cfs_rq metrics to now.

Use update_load_avg() every time we have to update and sync the cfs_rq
and the sched_entity before changing the state of a sched_entity.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 75 +++++++++++++++++------------------------------------
 1 file changed, 24 insertions(+), 51 deletions(-)

--
2.7.4

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ecc66d6..52cbc92 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3098,8 +3098,14 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
 	return decayed || removed_load;
 }
 
+/*
+ * Optional action to be done while updating the load average
+ */
+#define UPDATE_TG	0x1
+#define SKIP_AGE_LOAD	0x2
+
 /* Update task and its cfs_rq load average */
-static inline void update_load_avg(struct sched_entity *se, int update_tg)
+static inline void update_load_avg(struct sched_entity *se, int flags)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	u64 now = cfs_rq_clock_task(cfs_rq);
@@ -3110,11 +3116,12 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
 	 * Track task load average for carrying it to new CPU after migrated, and
 	 * track group sched_entity load average for task_h_load calc in migration
 	 */
-	__update_load_avg(now, cpu, &se->avg,
+	if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD))
+		__update_load_avg(now, cpu, &se->avg,
 			  se->on_rq * scale_load_down(se->load.weight),
 			  cfs_rq->curr == se, NULL);
 
-	if (update_cfs_rq_load_avg(now, cfs_rq, true) && update_tg)
+	if (update_cfs_rq_load_avg(now, cfs_rq, true) && (flags & UPDATE_TG))
 		update_tg_load_avg(cfs_rq, 0);
 }
 
@@ -3128,26 +3135,6 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
  */
 static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	if (!sched_feat(ATTACH_AGE_LOAD))
-		goto skip_aging;
-
-	/*
-	 * If we got migrated (either between CPUs or between cgroups) we'll
-	 * have aged the average right before clearing @last_update_time.
-	 *
-	 * Or we're fresh through post_init_entity_util_avg().
-	 */
-	if (se->avg.last_update_time) {
-		__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
-				  &se->avg, 0, 0, NULL);
-
-		/*
-		 * XXX: we could have just aged the entire load away if we've been
-		 * absent from the fair class for too long.
-		 */
-	}
-
-skip_aging:
 	se->avg.last_update_time = cfs_rq->avg.last_update_time;
 	cfs_rq->avg.load_avg += se->avg.load_avg;
 	cfs_rq->avg.load_sum += se->avg.load_sum;
@@ -3167,9 +3154,6 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
  */
 static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
-			  &se->avg, se->on_rq * scale_load_down(se->load.weight),
-			  cfs_rq->curr == se, NULL);
 
 	sub_positive(&cfs_rq->avg.load_avg, se->avg.load_avg);
 	sub_positive(&cfs_rq->avg.load_sum, se->avg.load_sum);
@@ -3184,34 +3168,20 @@ static inline void
 enqueue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	struct sched_avg *sa = &se->avg;
-	u64 now = cfs_rq_clock_task(cfs_rq);
-	int migrated, decayed;
-
-	migrated = !sa->last_update_time;
-	if (!migrated) {
-		__update_load_avg(now, cpu_of(rq_of(cfs_rq)), sa,
-			se->on_rq * scale_load_down(se->load.weight),
-			cfs_rq->curr == se, NULL);
-	}
-
-	decayed = update_cfs_rq_load_avg(now, cfs_rq, !migrated);
 
 	cfs_rq->runnable_load_avg += sa->load_avg;
 	cfs_rq->runnable_load_sum += sa->load_sum;
 
-	if (migrated)
+	if (!sa->last_update_time) {
 		attach_entity_load_avg(cfs_rq, se);
-
-	if (decayed || migrated)
 		update_tg_load_avg(cfs_rq, 0);
+	}
 }
 
 /* Remove the runnable load generated by se from cfs_rq's runnable load average */
 static inline void
 dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	update_load_avg(se, 1);
-
 	cfs_rq->runnable_load_avg =
 		max_t(long, cfs_rq->runnable_load_avg - se->avg.load_avg, 0);
 	cfs_rq->runnable_load_sum =
@@ -3285,7 +3255,10 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
 	return 0;
 }
 
-static inline void update_load_avg(struct sched_entity *se, int not_used)
+#define UPDATE_TG	0x0
+#define SKIP_AGE_LOAD	0x0
+
+static inline void update_load_avg(struct sched_entity *se, int not_used1)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	struct rq *rq = rq_of(cfs_rq);
@@ -3433,6 +3406,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	if (renorm && !curr)
 		se->vruntime += cfs_rq->min_vruntime;
 
+	update_load_avg(se, UPDATE_TG);
 	enqueue_entity_load_avg(cfs_rq, se);
 	account_entity_enqueue(cfs_rq, se);
 	update_cfs_shares(cfs_rq);
@@ -3507,6 +3481,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 * Update run-time statistics of the 'current'.
 	 */
 	update_curr(cfs_rq);
+	update_load_avg(se, UPDATE_TG);
 	dequeue_entity_load_avg(cfs_rq, se);
 
 	update_stats_dequeue(cfs_rq, se, flags);
@@ -3594,7 +3569,7 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 		 */
 		update_stats_wait_end(cfs_rq, se);
 		__dequeue_entity(cfs_rq, se);
-		update_load_avg(se, 1);
+		update_load_avg(se, UPDATE_TG);
 	}
 
 	update_stats_curr_start(cfs_rq, se);
@@ -3712,7 +3687,7 @@ entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
 	/*
 	 * Ensure that runnable average is periodically updated.
 	 */
-	update_load_avg(curr, 1);
+	update_load_avg(curr, UPDATE_TG);
 	update_cfs_shares(cfs_rq);
 
 #ifdef CONFIG_SCHED_HRTICK
@@ -4601,7 +4576,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		if (cfs_rq_throttled(cfs_rq))
 			break;
 
-		update_load_avg(se, 1);
+		update_load_avg(se, UPDATE_TG);
 		update_cfs_shares(cfs_rq);
 	}
 
@@ -4660,7 +4635,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		if (cfs_rq_throttled(cfs_rq))
 			break;
 
-		update_load_avg(se, 1);
+		update_load_avg(se, UPDATE_TG);
 		update_cfs_shares(cfs_rq);
 	}
 
@@ -8715,7 +8690,6 @@ static void detach_task_cfs_rq(struct task_struct *p)
 {
 	struct sched_entity *se = &p->se;
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-	u64 now = cfs_rq_clock_task(cfs_rq);
 
 	if (!vruntime_normalized(p)) {
 		/*
@@ -8727,7 +8701,7 @@ static void detach_task_cfs_rq(struct task_struct *p)
 	}
 
 	/* Catch up with the cfs_rq and remove our load when we leave */
-	update_cfs_rq_load_avg(now, cfs_rq, false);
+	update_load_avg(se, 0);
 	detach_entity_load_avg(cfs_rq, se);
 	update_tg_load_avg(cfs_rq, false);
 }
@@ -8735,7 +8709,6 @@ static void detach_task_cfs_rq(struct task_struct *p)
 static void attach_entity_cfs_rq(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-	u64 now = cfs_rq_clock_task(cfs_rq);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/*
@@ -8746,7 +8719,7 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
 #endif
 
 	/* Synchronize entity with its cfs_rq */
-	update_cfs_rq_load_avg(now, cfs_rq, false);
+	update_load_avg(se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
 	attach_entity_load_avg(cfs_rq, se);
 	update_tg_load_avg(cfs_rq, false);
 }
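
For reference, the calling convention this patch converges on can be modelled with a
small standalone program. This is only a sketch, not kernel code: struct toy_avg,
toy_age() and toy_update_load_avg() are simplified stand-ins, and the decay arithmetic
is arbitrary; only the flag handling and the "sync first, then change state" ordering
mirror the patch.

#include <stdio.h>

/* Same flag values as the patch; everything else below is a stand-in. */
#define UPDATE_TG	0x1	/* also refresh the owning task group */
#define SKIP_AGE_LOAD	0x2	/* do not age the entity before syncing */

struct toy_avg { unsigned long load_avg; unsigned long long last_update_time; };

/* Stand-in for __update_load_avg()/update_cfs_rq_load_avg(): decay up to @now. */
static void toy_age(struct toy_avg *a, unsigned long long now)
{
	if (now > a->last_update_time)
		a->load_avg -= a->load_avg >> 3;	/* arbitrary toy decay */
	a->last_update_time = now;
}

/* Mirrors the control flow of the new update_load_avg(se, flags). */
static void toy_update_load_avg(struct toy_avg *cfs_rq, struct toy_avg *se,
				unsigned long long now, int flags)
{
	if (se->last_update_time && !(flags & SKIP_AGE_LOAD))
		toy_age(se, now);

	toy_age(cfs_rq, now);

	if (flags & UPDATE_TG)
		printf("update_tg_load_avg()\n");
}

int main(void)
{
	struct toy_avg cfs_rq = { 1024, 100 }, se = { 512, 100 };

	/* detach_task_cfs_rq() pattern after the patch: sync first, then detach. */
	toy_update_load_avg(&cfs_rq, &se, 200, 0);
	cfs_rq.load_avg -= se.load_avg;		/* detach_entity_load_avg() stand-in */

	/* enqueue/dequeue/tick paths pass UPDATE_TG so group load is refreshed too. */
	toy_update_load_avg(&cfs_rq, &se, 300, UPDATE_TG);
	printf("cfs_rq.load_avg = %lu\n", cfs_rq.load_avg);
	return 0;
}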