From patchwork Wed Apr 19 16:44:16 2017
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 97665
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org
Cc: dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com, yuyang.du@intel.com,
 pjt@google.com, bsegall@google.com, Vincent Guittot
Subject: [PATCH 1/2] sched/cfs: make util/load_avg more stable
Date: Wed, 19 Apr 2017 18:44:16 +0200
Message-Id: <1492620257-30109-2-git-send-email-vincent.guittot@linaro.org>
In-Reply-To: <1492620257-30109-1-git-send-email-vincent.guittot@linaro.org>
References: <1492619370-29246-1-git-send-email-vincent.guittot@linaro.org>
 <1492620257-30109-1-git-send-email-vincent.guittot@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

In the current implementation of load/util_avg, we assume that the
ongoing time segment has fully elapsed, and util/load_sum is divided
by LOAD_AVG_MAX, even if part of the time segment still remains.
As a consequence, this remaining part is considered as idle time and
generates unexpected variations of the util_avg of a busy CPU in the
range ]1002..1024[ whereas util_avg should stay at 1023.

In order to keep the metric stable, we should not consider the ongoing
time segment when computing load/util_avg, but only the segments that
have already fully elapsed.

Suggested-by: Peter Zijlstra
Signed-off-by: Vincent Guittot
---
Sorry, some unexpected characters appeared in the commit message of the
previous version.

 kernel/sched/fair.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

-- 
2.7.4

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3f83a35..f74da94 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3017,12 +3017,15 @@ ___update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 	/*
 	 * Step 2: update *_avg.
 	 */
-	sa->load_avg = div_u64(sa->load_sum, LOAD_AVG_MAX);
+	sa->load_avg = div_u64((sa->load_sum - sa->period_contrib * weight),
+				(LOAD_AVG_MAX - 1024));
 	if (cfs_rq) {
 		cfs_rq->runnable_load_avg =
-			div_u64(cfs_rq->runnable_load_sum, LOAD_AVG_MAX);
+			div_u64((cfs_rq->runnable_load_sum - sa->period_contrib * weight),
+				(LOAD_AVG_MAX - 1024));
 	}
-	sa->util_avg = sa->util_sum / LOAD_AVG_MAX;
+	sa->util_avg = (sa->util_sum - (running * sa->period_contrib << SCHED_CAPACITY_SHIFT)) /
+			(LOAD_AVG_MAX - 1024);

 	return 1;
 }
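[Editor's illustration, not part of the patch] The arithmetic above can be
sketched with a small standalone Python model of PELT for an always-running
CPU. It is an assumption of this note, not kernel code: LOAD_AVG_MAX is
approximated in floating point as PERIOD / (1 - y) rather than the kernel's
precomputed integer 47742, so the stable value comes out as exactly 1024
here, where the kernel's fixed-point arithmetic pins it near 1023. The model
compares the old util_avg (util_sum divided by LOAD_AVG_MAX) with the new
one (subtract the ongoing segment's contribution, divide by
LOAD_AVG_MAX - 1024):

```python
# Standalone model of the PELT averaging issue (float math, not the
# kernel's fixed-point arithmetic).
Y = 0.5 ** (1 / 32)      # PELT decay factor: Y**32 == 0.5
PERIOD = 1024            # one PELT time segment, in microseconds
SCALE = 1024             # 1 << SCHED_CAPACITY_SHIFT

# Max decayed sum of fully busy segments: PERIOD * (1 + Y + Y**2 + ...).
# The kernel precomputes this as the integer 47742; float gives ~47788.
LOAD_AVG_MAX = PERIOD / (1 - Y)

def util_avgs(period_contrib):
    """Old and new util_avg of an always-running CPU that is
    period_contrib microseconds into the current, unfinished segment."""
    past = LOAD_AVG_MAX - PERIOD                # fully elapsed segments
    util_sum = (past + period_contrib) * SCALE
    old = util_sum / LOAD_AVG_MAX
    new = (util_sum - period_contrib * SCALE) / (LOAD_AVG_MAX - PERIOD)
    return old, new

for contrib in (0, 256, 512, 1023):
    old, new = util_avgs(contrib)
    print(f"period_contrib={contrib:4d}  old util_avg={old:7.2f}  new={new:7.2f}")
# old drifts through roughly 1002..1024 as the segment fills up;
# new is independent of period_contrib, so the metric stays stable.
```

The variation of the old formula comes entirely from period_contrib: at the
start of a segment the partial contribution is near zero, so dividing by the
full LOAD_AVG_MAX undershoots; just before the segment closes it almost
reaches the maximum again.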