From patchwork Wed Apr 19 16:44:17 2017
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 97664
X-Mailing-List: linux-kernel@vger.kernel.org
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org
Cc: dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com, yuyang.du@intel.com, pjt@google.com, bsegall@google.com, Vincent Guittot
Subject: [PATCH 2/2] sched/cfs: take into account current time segment
Date: Wed, 19 Apr 2017 18:44:17 +0200
Message-Id: <1492620257-30109-3-git-send-email-vincent.guittot@linaro.org>
In-Reply-To: <1492620257-30109-1-git-send-email-vincent.guittot@linaro.org>
References: <1492619370-29246-1-git-send-email-vincent.guittot@linaro.org>
 <1492620257-30109-1-git-send-email-vincent.guittot@linaro.org>

Not accounting the current time segment adds unwanted latency to the
load/util_avg responsiveness, especially when the time is scaled instead
of the contribution.
Instead of waiting for the current time segment to have fully elapsed
before accounting it in load/util_avg, we can already account for the
elapsed part and change the range used to compute load/util_avg
accordingly.

At the very beginning of a new time segment, the past segments have just
been decayed, so the maximum value of the sum is LOAD_AVG_MAX*y. At the
very end of the current time segment, the maximum value becomes
1024(us) + LOAD_AVG_MAX*y, which is equal to LOAD_AVG_MAX. In fact, the
maximum value is sa->period_contrib + LOAD_AVG_MAX*y at any point in the
time segment. Taking advantage of the fact that
LOAD_AVG_MAX*y == LOAD_AVG_MAX-1024, the range becomes
[0..LOAD_AVG_MAX-1024+sa->period_contrib]. As the elapsed part is
already accounted in load/util_sum, we update the maximum value
according to the current position in the time segment instead of
removing its contribution.

Signed-off-by: Vincent Guittot
---
 kernel/sched/fair.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

-- 
2.7.4

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f74da94..c3b8f0f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3017,15 +3017,12 @@ ___update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 	/*
 	 * Step 2: update *_avg.
 	 */
-	sa->load_avg = div_u64((sa->load_sum - sa->period_contrib * weight),
-			(LOAD_AVG_MAX - 1024));
+	sa->load_avg = div_u64(sa->load_sum, LOAD_AVG_MAX - 1024 + sa->period_contrib);
 	if (cfs_rq) {
 		cfs_rq->runnable_load_avg =
-			div_u64((cfs_rq->runnable_load_sum - sa->period_contrib * weight),
-				(LOAD_AVG_MAX - 1024));
+			div_u64(cfs_rq->runnable_load_sum, LOAD_AVG_MAX - 1024 + sa->period_contrib);
 	}
-	sa->util_avg = (sa->util_sum - (running * sa->period_contrib << SCHED_CAPACITY_SHIFT)) /
-		(LOAD_AVG_MAX - 1024);
+	sa->util_avg = sa->util_sum / (LOAD_AVG_MAX - 1024 + sa->period_contrib);
 
 	return 1;
 }