From patchwork Tue Nov 8 09:53:44 2016
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 81268
From: Vincent Guittot <vincent.guittot@linaro.org>
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
	dietmar.eggemann@arm.com
Cc: yuyang.du@intel.com, Morten.Rasmussen@arm.com, pjt@google.com,
	bsegall@google.com, kernellwp@gmail.com,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH 3/6 v7] sched: factorize PELT update
Date: Tue, 8 Nov 2016 10:53:44 +0100
Message-Id: <1478598827-32372-4-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1478598827-32372-1-git-send-email-vincent.guittot@linaro.org>
References: <1478598827-32372-1-git-send-email-vincent.guittot@linaro.org>
Every time we modify the load/utilization of a sched_entity, we have to
sync it with its cfs_rq. This update is done in different ways:
- when attaching/detaching a sched_entity, we update the cfs_rq and then
  we sync the entity with the cfs_rq
- when enqueueing/dequeuing the sched_entity, we update both the
  sched_entity and the cfs_rq metrics to now

Use update_load_avg() every time we have to update and sync the cfs_rq
and the sched_entity before changing the state of a sched_entity.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 76 ++++++++++++++++++-----------------------------------
 1 file changed, 25 insertions(+), 51 deletions(-)

-- 
2.7.4

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bc5949d..f18e42e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3099,8 +3099,14 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
 	return decayed || removed_load;
 }
 
+/*
+ * Optional action to be done while updating the load average
+ */
+#define UPDATE_TG	0x1
+#define SKIP_AGE_LOAD	0x2
+
 /* Update task and its cfs_rq load average */
-static inline void update_load_avg(struct sched_entity *se, int update_tg)
+static inline void update_load_avg(struct sched_entity *se, int flags)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	u64 now = cfs_rq_clock_task(cfs_rq);
@@ -3111,11 +3117,13 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
 	 * Track task load average for carrying it to new CPU after migrated, and
 	 * track group sched_entity load average for task_h_load calc in migration
 	 */
-	__update_load_avg(now, cpu, &se->avg,
+	if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD)) {
+		__update_load_avg(now, cpu, &se->avg,
 			  se->on_rq * scale_load_down(se->load.weight),
 			  cfs_rq->curr == se, NULL);
+	}
 
-	if (update_cfs_rq_load_avg(now, cfs_rq, true) && update_tg)
+	if (update_cfs_rq_load_avg(now, cfs_rq, true) && (flags & UPDATE_TG))
 		update_tg_load_avg(cfs_rq, 0);
 }
 
@@ -3129,26 +3137,6 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
  */
 static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	if (!sched_feat(ATTACH_AGE_LOAD))
-		goto skip_aging;
-
-	/*
-	 * If we got migrated (either between CPUs or between cgroups) we'll
-	 * have aged the average right before clearing @last_update_time.
-	 *
-	 * Or we're fresh through post_init_entity_util_avg().
-	 */
-	if (se->avg.last_update_time) {
-		__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
-				  &se->avg, 0, 0, NULL);
-
-		/*
-		 * XXX: we could have just aged the entire load away if we've been
-		 * absent from the fair class for too long.
-		 */
-	}
-
-skip_aging:
 	se->avg.last_update_time = cfs_rq->avg.last_update_time;
 	cfs_rq->avg.load_avg += se->avg.load_avg;
 	cfs_rq->avg.load_sum += se->avg.load_sum;
@@ -3168,9 +3156,6 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
  */
 static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
-			  &se->avg, se->on_rq * scale_load_down(se->load.weight),
-			  cfs_rq->curr == se, NULL);
 
 	sub_positive(&cfs_rq->avg.load_avg, se->avg.load_avg);
 	sub_positive(&cfs_rq->avg.load_sum, se->avg.load_sum);
@@ -3185,34 +3170,20 @@ static inline void
 enqueue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	struct sched_avg *sa = &se->avg;
-	u64 now = cfs_rq_clock_task(cfs_rq);
-	int migrated, decayed;
-
-	migrated = !sa->last_update_time;
-	if (!migrated) {
-		__update_load_avg(now, cpu_of(rq_of(cfs_rq)), sa,
-			se->on_rq * scale_load_down(se->load.weight),
-			cfs_rq->curr == se, NULL);
-	}
-
-	decayed = update_cfs_rq_load_avg(now, cfs_rq, !migrated);
 
 	cfs_rq->runnable_load_avg += sa->load_avg;
 	cfs_rq->runnable_load_sum += sa->load_sum;
 
-	if (migrated)
+	if (!sa->last_update_time) {
 		attach_entity_load_avg(cfs_rq, se);
-
-	if (decayed || migrated)
 		update_tg_load_avg(cfs_rq, 0);
+	}
 }
 
 /* Remove the runnable load generated by se from cfs_rq's runnable load average */
 static inline void
 dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	update_load_avg(se, 1);
-
 	cfs_rq->runnable_load_avg =
 		max_t(long, cfs_rq->runnable_load_avg - se->avg.load_avg, 0);
 	cfs_rq->runnable_load_sum =
@@ -3286,7 +3257,10 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
 	return 0;
 }
 
-static inline void update_load_avg(struct sched_entity *se, int not_used)
+#define UPDATE_TG	0x0
+#define SKIP_AGE_LOAD	0x0
+
+static inline void update_load_avg(struct sched_entity *se, int not_used1)
 {
 	cpufreq_update_util(rq_of(cfs_rq_of(se)), 0);
 }
@@ -3431,6 +3405,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	if (renorm && !curr)
 		se->vruntime += cfs_rq->min_vruntime;
 
+	update_load_avg(se, UPDATE_TG);
 	enqueue_entity_load_avg(cfs_rq, se);
 	account_entity_enqueue(cfs_rq, se);
 	update_cfs_shares(cfs_rq);
@@ -3505,6 +3480,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 * Update run-time statistics of the 'current'.
 	 */
 	update_curr(cfs_rq);
+	update_load_avg(se, UPDATE_TG);
 	dequeue_entity_load_avg(cfs_rq, se);
 
 	update_stats_dequeue(cfs_rq, se, flags);
@@ -3592,7 +3568,7 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 		 */
 		update_stats_wait_end(cfs_rq, se);
 		__dequeue_entity(cfs_rq, se);
-		update_load_avg(se, 1);
+		update_load_avg(se, UPDATE_TG);
 	}
 
 	update_stats_curr_start(cfs_rq, se);
@@ -3710,7 +3686,7 @@ entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
 	/*
 	 * Ensure that runnable average is periodically updated.
 	 */
-	update_load_avg(curr, 1);
+	update_load_avg(curr, UPDATE_TG);
 	update_cfs_shares(cfs_rq);
 
 #ifdef CONFIG_SCHED_HRTICK
@@ -4607,7 +4583,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		if (cfs_rq_throttled(cfs_rq))
 			break;
 
-		update_load_avg(se, 1);
+		update_load_avg(se, UPDATE_TG);
 		update_cfs_shares(cfs_rq);
 	}
 
@@ -4666,7 +4642,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		if (cfs_rq_throttled(cfs_rq))
 			break;
 
-		update_load_avg(se, 1);
+		update_load_avg(se, UPDATE_TG);
 		update_cfs_shares(cfs_rq);
 	}
 
@@ -8725,10 +8701,9 @@ static inline bool vruntime_normalized(struct task_struct *p)
 static void detach_entity_cfs_rq(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-	u64 now = cfs_rq_clock_task(cfs_rq);
 
 	/* Catch up with the cfs_rq and remove our load when we leave */
-	update_cfs_rq_load_avg(now, cfs_rq, false);
+	update_load_avg(se, 0);
 	detach_entity_load_avg(cfs_rq, se);
 	update_tg_load_avg(cfs_rq, false);
 }
@@ -8736,7 +8711,6 @@ static void detach_entity_cfs_rq(struct sched_entity *se)
 static void attach_entity_cfs_rq(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-	u64 now = cfs_rq_clock_task(cfs_rq);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/*
@@ -8747,7 +8721,7 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
 #endif
 
 	/* Synchronize entity with its cfs_rq */
-	update_cfs_rq_load_avg(now, cfs_rq, false);
+	update_load_avg(se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
 	attach_entity_load_avg(cfs_rq, se);
 	update_tg_load_avg(cfs_rq, false);
 }
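
For readers who want to see the resulting control flow outside the kernel tree,
below is a minimal userspace C sketch of the flag scheme this patch introduces
(UPDATE_TG / SKIP_AGE_LOAD). Everything in the sketch (mock_entity, mock_cfs_rq,
age_entity, the decay arithmetic) is a simplified stand-in rather than the
kernel's types or math; only the flag handling mirrors the patch.

/*
 * Illustrative sketch only, not kernel code: a userspace mock of the
 * factorized, flag-based update_load_avg() entry point.
 */
#include <stdio.h>
#include <stdbool.h>

#define UPDATE_TG	0x1	/* also refresh the task-group contribution */
#define SKIP_AGE_LOAD	0x2	/* do not age the entity's own signal first */

struct mock_entity {
	unsigned long long last_update_time;	/* 0 means "freshly migrated" */
	long load_avg;
};

struct mock_cfs_rq {
	long load_avg;
	unsigned long long last_update_time;
};

/* Age the entity's signal up to @now (placeholder decay, not PELT). */
static void age_entity(struct mock_entity *se, unsigned long long now)
{
	se->load_avg -= (long)(now - se->last_update_time) / 10;
	if (se->load_avg < 0)
		se->load_avg = 0;
	se->last_update_time = now;
}

/* Age the cfs_rq's signal up to @now; returns true if anything changed. */
static bool update_cfs_rq_load_avg(unsigned long long now, struct mock_cfs_rq *cfs_rq)
{
	bool decayed = now != cfs_rq->last_update_time;

	cfs_rq->last_update_time = now;
	return decayed;
}

static void update_tg_load_avg(struct mock_cfs_rq *cfs_rq)
{
	printf("tg contribution refreshed (cfs_rq load_avg=%ld)\n", cfs_rq->load_avg);
}

/* Single entry point, mirroring update_load_avg(se, flags) in the patch. */
static void update_load_avg(struct mock_cfs_rq *cfs_rq, struct mock_entity *se,
			    unsigned long long now, int flags)
{
	/* Skip aging for freshly migrated entities or when explicitly asked. */
	if (se->last_update_time && !(flags & SKIP_AGE_LOAD))
		age_entity(se, now);

	if (update_cfs_rq_load_avg(now, cfs_rq) && (flags & UPDATE_TG))
		update_tg_load_avg(cfs_rq);
}

int main(void)
{
	struct mock_cfs_rq cfs_rq = { .load_avg = 1024 };
	struct mock_entity se = { .last_update_time = 100, .load_avg = 512 };

	/* enqueue/dequeue path: age and sync both, refresh the task group. */
	update_load_avg(&cfs_rq, &se, 200, UPDATE_TG);

	/* attach path with entity aging disabled: sync only the cfs_rq. */
	update_load_avg(&cfs_rq, &se, 300, SKIP_AGE_LOAD);

	printf("se load_avg=%ld last_update_time=%llu\n",
	       se.load_avg, se.last_update_time);
	return 0;
}

The point of the single entry point is the same as in the patch: every state
change (enqueue, dequeue, attach, detach) goes through one aging-then-sync
sequence, with the flags selecting whether the entity is aged and whether the
task-group contribution is refreshed.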