From patchwork Fri Apr 1 16:38:37 2016
X-Patchwork-Submitter: Leo Yan
X-Patchwork-Id: 64892
From: Leo Yan <leo.yan@linaro.org>
To: Ingo Molnar, Peter Zijlstra, Morten Rasmussen, Dietmar Eggemann,
    Vincent Guittot, Steve Muckle
Cc: linux-kernel@vger.kernel.org, eas-dev@lists.linaro.org, Leo Yan
Subject: [PATCH RFC] sched/fair: let cpu's cfs_rq reflect task migration
Date: Sat, 2 Apr 2016 00:38:37 +0800
Message-Id: <1459528717-17339-1-git-send-email-leo.yan@linaro.org>
X-Mailer: git-send-email 1.9.1
X-Mailing-List: linux-kernel@vger.kernel.org

When a task is migrated from CPU_A to CPU_B, the scheduler removes the
task's load/util from the cfs_rq the task was attached to and adds them
to the cfs_rq it migrates to.
But when the kernel enables CONFIG_FAIR_GROUP_SCHED, that cfs_rq is not
necessarily the same as the cpu's root cfs_rq. As a result, after the
task has migrated to CPU_B, CPU_A's root cfs_rq still carries the
task's stale load/util, while CPU_B's root cfs_rq does not yet reflect
the load/util introduced by the task.

So this patch also applies the task's load/util to the cpu's root
cfs_rq, so that the cpu's cfs_rq really reflects task migration.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 kernel/sched/fair.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

--
1.9.1

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0fe30e6..10ca1a9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2825,12 +2825,24 @@ static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq);
 static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 {
         struct sched_avg *sa = &cfs_rq->avg;
+        struct sched_avg *cpu_sa = NULL;
         int decayed, removed = 0;
+        int cpu = cpu_of(rq_of(cfs_rq));
+
+        if (&cpu_rq(cpu)->cfs != cfs_rq)
+                cpu_sa = &cpu_rq(cpu)->cfs.avg;
 
         if (atomic_long_read(&cfs_rq->removed_load_avg)) {
                 s64 r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
                 sa->load_avg = max_t(long, sa->load_avg - r, 0);
                 sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);
+
+                if (cpu_sa) {
+                        cpu_sa->load_avg = max_t(long, cpu_sa->load_avg - r, 0);
+                        cpu_sa->load_sum = max_t(s64,
+                                cpu_sa->load_sum - r * LOAD_AVG_MAX, 0);
+                }
+
                 removed = 1;
         }
 
@@ -2838,6 +2850,12 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
                 long r = atomic_long_xchg(&cfs_rq->removed_util_avg, 0);
                 sa->util_avg = max_t(long, sa->util_avg - r, 0);
                 sa->util_sum = max_t(s32, sa->util_sum - r * LOAD_AVG_MAX, 0);
+
+                if (cpu_sa) {
+                        cpu_sa->util_avg = max_t(long, cpu_sa->util_avg - r, 0);
+                        cpu_sa->util_sum = max_t(s64,
+                                cpu_sa->util_sum - r * LOAD_AVG_MAX, 0);
+                }
         }
 
         decayed = __update_load_avg(now, cpu_of(rq_of(cfs_rq)), sa,
@@ -2896,6 +2914,8 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
 
 static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
+        int cpu = cpu_of(rq_of(cfs_rq));
+
         if (!sched_feat(ATTACH_AGE_LOAD))
                 goto skip_aging;
 
@@ -2919,6 +2939,13 @@ skip_aging:
         cfs_rq->avg.load_sum += se->avg.load_sum;
         cfs_rq->avg.util_avg += se->avg.util_avg;
         cfs_rq->avg.util_sum += se->avg.util_sum;
+
+        if (&cpu_rq(cpu)->cfs != cfs_rq) {
+                cpu_rq(cpu)->cfs.avg.load_avg += se->avg.load_avg;
+                cpu_rq(cpu)->cfs.avg.load_sum += se->avg.load_sum;
+                cpu_rq(cpu)->cfs.avg.util_avg += se->avg.util_avg;
+                cpu_rq(cpu)->cfs.avg.util_sum += se->avg.util_sum;
+        }
 }
 
 static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
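
As a side note for readers unfamiliar with the group-scheduling
hierarchy, the effect the changelog describes can be modelled with a
small userspace toy program. This is only an illustrative sketch: the
structures, the migrate_task_load() helper and the load value of 512
are made up for the example and are not the kernel's real PELT code.
The point is that removing/adding load on a task-group cfs_rq alone
leaves the per-CPU root cfs_rq (cpu_rq(cpu)->cfs) stale, which is what
the hunks above compensate for.

/*
 * Toy model, not kernel code: with group scheduling, a task's load
 * lives in a task-group cfs_rq, so moving it between CPUs does not by
 * itself change the per-CPU root cfs_rq.
 */
#include <stdio.h>

struct toy_cfs_rq {
	long load_avg;
};

struct toy_cpu {
	struct toy_cfs_rq root;   /* stands in for cpu_rq(cpu)->cfs */
	struct toy_cfs_rq group;  /* stands in for the task group's per-CPU cfs_rq */
};

/* Move @load from the source CPU's group cfs_rq to the target's. */
static void migrate_task_load(struct toy_cpu *src, struct toy_cpu *dst,
			      long load, int propagate_to_root)
{
	src->group.load_avg -= load;
	dst->group.load_avg += load;

	if (propagate_to_root) {
		/* What the patch adds: mirror the change on the root cfs_rq. */
		src->root.load_avg -= load;
		dst->root.load_avg += load;
	}
}

int main(void)
{
	/* CPU_A initially runs one task of load 512 inside a task group. */
	struct toy_cpu cpu_a = { .root = { 512 }, .group = { 512 } };
	struct toy_cpu cpu_b = { .root = {   0 }, .group = {   0 } };

	/* Without propagation the root cfs_rq values go stale. */
	migrate_task_load(&cpu_a, &cpu_b, 512, 0);
	printf("no propagation:   root A=%ld root B=%ld (stale)\n",
	       cpu_a.root.load_avg, cpu_b.root.load_avg);

	/* Reset and repeat with propagation, as the patch intends. */
	cpu_a = (struct toy_cpu){ .root = { 512 }, .group = { 512 } };
	cpu_b = (struct toy_cpu){ .root = {   0 }, .group = {   0 } };
	migrate_task_load(&cpu_a, &cpu_b, 512, 1);
	printf("with propagation: root A=%ld root B=%ld\n",
	       cpu_a.root.load_avg, cpu_b.root.load_avg);

	return 0;
}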