From patchwork Tue Oct 20 12:04:41 2015
X-Patchwork-Submitter: Dietmar Eggemann
X-Patchwork-Id: 55301
From: Dietmar Eggemann <dietmar.eggemann@arm.com>
To: Peter Zijlstra, Ingo Molnar
Cc: linux-kernel@vger.kernel.org, Yuyang Du
Subject: [PATCH] sched/fair: Remove empty idle enter and exit functions
Date: Tue, 20 Oct 2015 13:04:41 +0100
Message-Id: <1445342681-17171-1-git-send-email-dietmar.eggemann@arm.com>
X-Mailer: git-send-email 1.9.1

Commit cd126afe838d ("sched/fair: Remove rq's runnable avg") got rid of
rq->avg, so there is no need to update it any more when entering or
exiting idle. Remove the now-empty functions idle_{enter|exit}_fair().

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
 kernel/sched/fair.c      | 24 +-----------------------
 kernel/sched/idle_task.c |  1 -
 kernel/sched/sched.h     |  8 --------
 3 files changed, 1 insertion(+), 32 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 824aa9f501a3..54e2cb4ed027 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2835,24 +2835,6 @@ void remove_entity_load_avg(struct sched_entity *se)
 	atomic_long_add(se->avg.util_avg, &cfs_rq->removed_util_avg);
 }
 
-/*
- * Update the rq's load with the elapsed running time before entering
- * idle. if the last scheduled task is not a CFS task, idle_enter will
- * be the only way to update the runnable statistic.
- */
-void idle_enter_fair(struct rq *this_rq)
-{
-}
-
-/*
- * Update the rq's load with the elapsed idle time before a task is
- * scheduled. if the newly scheduled task is not a CFS task, idle_exit will
- * be the only way to update the runnable statistic.
- */
-void idle_exit_fair(struct rq *this_rq)
-{
-}
-
 static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq)
 {
 	return cfs_rq->runnable_load_avg;
@@ -7248,8 +7230,6 @@ static int idle_balance(struct rq *this_rq)
 	int pulled_task = 0;
 	u64 curr_cost = 0;
 
-	idle_enter_fair(this_rq);
-
 	/*
 	 * We must set idle_stamp _before_ calling idle_balance(), such that we
 	 * measure the duration of idle_balance() as idle time.
@@ -7330,10 +7310,8 @@ static int idle_balance(struct rq *this_rq)
 	if (this_rq->nr_running != this_rq->cfs.h_nr_running)
 		pulled_task = -1;
 
-	if (pulled_task) {
-		idle_exit_fair(this_rq);
+	if (pulled_task)
 		this_rq->idle_stamp = 0;
-	}
 
 	return pulled_task;
 }
diff --git a/kernel/sched/idle_task.c b/kernel/sched/idle_task.c
index c4ae0f1fdf9b..47ce94931f1b 100644
--- a/kernel/sched/idle_task.c
+++ b/kernel/sched/idle_task.c
@@ -47,7 +47,6 @@ dequeue_task_idle(struct rq *rq, struct task_struct *p, int flags)
 
 static void put_prev_task_idle(struct rq *rq, struct task_struct *prev)
 {
-	idle_exit_fair(rq);
 	rq_last_tick_reset(rq);
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index efd3bfc7e347..2eb2002aa336 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1249,16 +1249,8 @@ extern void update_group_capacity(struct sched_domain *sd, int cpu);
 
 extern void trigger_load_balance(struct rq *rq);
 
-extern void idle_enter_fair(struct rq *this_rq);
-extern void idle_exit_fair(struct rq *this_rq);
-
 extern void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask);
 
-#else
-
-static inline void idle_enter_fair(struct rq *rq) { }
-static inline void idle_exit_fair(struct rq *rq) { }
-
 #endif
 
 #ifdef CONFIG_CPU_IDLE