
sched: put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED

Message ID 1393328862-19997-1-git-send-email-dietmar.eggemann@arm.com
State New

Commit Message

Dietmar Eggemann Feb. 25, 2014, 11:47 a.m. UTC
From: Dietmar Eggemann <Dietmar.Eggemann@arm.com>

The struct sched_avg of struct rq is only used in case group
scheduling is enabled inside __update_tg_runnable_avg() to update
per-cpu representation of a task group.  I.e., there is no need to
maintain the runnable avg of a rq in the !CONFIG_FAIR_GROUP_SCHED case.
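
For reference, __update_tg_runnable_avg() is the only consumer of rq->avg;
it folds the rq's runnable fraction into the task group's per-cpu
runnable_avg.  Roughly, as a simplified sketch of the helper in the code of
that era (not a verbatim copy):

static inline void __update_tg_runnable_avg(struct sched_avg *sa,
					    struct cfs_rq *cfs_rq)
{
	struct task_group *tg = cfs_rq->tg;
	long contrib;

	/* fraction of the CPU this sched_avg has been runnable */
	contrib = div_u64((u64)sa->runnable_avg_sum << NICE_0_SHIFT,
			  sa->runnable_avg_period + 1);
	contrib -= cfs_rq->tg_runnable_contrib;

	/* only publish the delta when it is large enough to matter */
	if (abs(contrib) > cfs_rq->tg_runnable_contrib / 64) {
		atomic_add(contrib, &tg->runnable_avg);
		cfs_rq->tg_runnable_contrib += contrib;
	}
}

Without CONFIG_FAIR_GROUP_SCHED this helper resolves to an empty stub, so
rq->avg would be maintained for nothing.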

This patch guards struct sched_avg of struct rq and
update_rq_runnable_avg() with CONFIG_FAIR_GROUP_SCHED.

There is an extra empty definition for update_rq_runnable_avg()
necessary for the !CONFIG_FAIR_GROUP_SCHED && CONFIG_SMP case.
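
To make the conditions explicit, kernel/sched/fair.c ends up roughly like
this (a sketch only, assuming the per-entity load-tracking block stays under
CONFIG_SMP and that the !CONFIG_SMP build already provides an empty
update_rq_runnable_avg() stub):

#ifdef CONFIG_SMP
#ifdef CONFIG_FAIR_GROUP_SCHED
/* real implementation: maintains rq->avg and feeds the task group */
static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
{
	__update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable);
	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
}
#else
/* extra empty stub for the !CONFIG_FAIR_GROUP_SCHED && CONFIG_SMP case */
static inline void update_rq_runnable_avg(struct rq *rq, int runnable) {}
#endif
#else
/* !CONFIG_SMP: the existing stubs already cover update_rq_runnable_avg() */
#endif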

The function print_cfs_group_stats() which prints out struct sched_avg
of struct rq is already guarded with CONFIG_FAIR_GROUP_SCHED.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---

Hi,

I was just wondering what the overall policy is when it comes to guarding
specific functionality in the scheduler code.  Do we want to guard
something like the fair group scheduling support completely?

The patch is against tip/master.

 kernel/sched/fair.c  |   13 +++++++------
 kernel/sched/sched.h |    2 ++
 2 files changed, 9 insertions(+), 6 deletions(-)

Comments

Dietmar Eggemann Feb. 25, 2014, 5:47 p.m. UTC | #1
On 25/02/14 13:16, Srikar Dronamraju wrote:
>> The struct sched_avg of struct rq is only used in case group
>> scheduling is enabled inside __update_tg_runnable_avg() to update
>> per-cpu representation of a task group.  I.e., there is no need to
>> maintain the runnable avg of a rq in the !CONFIG_FAIR_GROUP_SCHED case.
>>
>> This patch guards struct sched_avg of struct rq and
>> update_rq_runnable_avg() with CONFIG_FAIR_GROUP_SCHED.
>>
> 
> While this patch looks good, I see fields in sched_avg, viz. decay_count,
> last_runnable_update and load_avg_contrib, that are only relevant to
> sched_entity, i.e. they don't seem to be updated or used for rq->avg.
> Should we look at splitting sched_avg so that rq->avg doesn't have
> unwanted fields?

Yes, AFAICS at least load_avg_contrib and decay_count are only relevant
for struct sched_entity, whereas last_runnable_update is used in
__update_entity_runnable_avg() itself.
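
Purely to illustrate that idea (hypothetical, not something this patch
proposes, and assuming the remaining members of struct sched_avg are
runnable_avg_sum and runnable_avg_period), a split could look roughly like
the following, with __update_entity_runnable_avg() then taking a
struct runnable_avg * instead of a struct sched_avg *:

/* Hypothetical split, illustration only, not part of this patch. */
struct runnable_avg {			/* all that rq->avg needs */
	u64 last_runnable_update;
	u32 runnable_avg_sum, runnable_avg_period;
};

struct sched_avg {			/* per-entity view */
	struct runnable_avg ravg;	/* part shared with struct rq */
	s64 decay_count;
	unsigned long load_avg_contrib;
};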

By having struct sched_avg embedded into both struct sched_entity and
struct rq, __update_entity_runnable_avg() and __update_tg_runnable_avg()
can be used on both, and moreover all current members of struct sched_avg
logically belong together.

With this patch I was more interested in the fact that we do not have to
maintain the load avg for struct rq in a !CONFIG_FAIR_GROUP_SCHED system.

-- Dietmar

> 
> --
> Thanks and Regards
> Srikar Dronamraju
> 



Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5f6ddbef80af..76c6513b6889 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2376,12 +2376,19 @@  static inline void __update_group_entity_contrib(struct sched_entity *se)
 		se->avg.load_avg_contrib >>= NICE_0_SHIFT;
 	}
 }
+
+static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
+{
+	__update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable);
+	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
+}
 #else /* CONFIG_FAIR_GROUP_SCHED */
 static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
 						 int force_update) {}
 static inline void __update_tg_runnable_avg(struct sched_avg *sa,
 						  struct cfs_rq *cfs_rq) {}
 static inline void __update_group_entity_contrib(struct sched_entity *se) {}
+static inline void update_rq_runnable_avg(struct rq *rq, int runnable) {}
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
 static inline void __update_task_entity_contrib(struct sched_entity *se)
@@ -2480,12 +2487,6 @@  static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
 	__update_cfs_rq_tg_load_contrib(cfs_rq, force_update);
 }
 
-static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
-{
-	__update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable);
-	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
-}
-
 /* Add the load generated by se into cfs_rq's child load-average */
 static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
 						  struct sched_entity *se,
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4be68da1fe00..63beab7512e7 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -630,7 +630,9 @@  struct rq {
 	struct llist_head wake_list;
 #endif
 
+#ifdef CONFIG_FAIR_GROUP_SCHED
 	struct sched_avg avg;
+#endif
 };
 
 static inline int cpu_of(struct rq *rq)