[RFCv6,05/10] sched/{core,fair}: trigger OPP change request on fork()

Message ID 1449641971-20827-6-git-send-email-smuckle@linaro.org
State Superseded

Commit Message

Steve Muckle Dec. 9, 2015, 6:19 a.m. UTC
From: Juri Lelli <juri.lelli@arm.com>


Patch "sched/fair: add triggers for OPP change requests" introduced OPP
change triggers for enqueue_task_fair(), but the trigger was operating only
for wakeups. Fact is that it makes sense to consider wakeup_new also (i.e.,
fork()), as we don't know anything about a newly created task and thus we
most certainly want to jump to max OPP to not harm performance too much.

However, it is not currently possible (or at least it wasn't evident to me
how to do so :/) to tell new wakeups from other (non wakeup) operations.

This patch introduces an additional flag in sched.h that is only set at
fork() time and it is then consumed in enqueue_task_fair() for our purpose.
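
For illustration, here is a minimal, self-contained userspace sketch of the
decision this flag enables at the end of enqueue_task_fair(). It is not
kernel code: trigger_opp_request() is a made-up stand-in for
update_capacity_of(), and the ENQUEUE_WAKEUP value is assumed from the
existing sched.h definitions.

#include <stdio.h>

#define ENQUEUE_WAKEUP		0x01	/* assumed value, see sched.h */
#define ENQUEUE_WAKEUP_NEW	0x20	/* new flag, set by wake_up_new_task() */

/* Stand-in for update_capacity_of(); not a kernel API. */
static void trigger_opp_request(int cpu)
{
	printf("OPP change requested on CPU%d\n", cpu);
}

/* Mirrors the check at the end of enqueue_task_fair() after this patch. */
static void enqueue(int cpu, int flags)
{
	int task_new = flags & ENQUEUE_WAKEUP_NEW;
	int task_wakeup = flags & ENQUEUE_WAKEUP;

	/*
	 * Trigger an OPP change request for wakeups and for newly forked
	 * tasks, but not for other enqueues (e.g. load balancing moves).
	 */
	if (task_new || task_wakeup)
		trigger_opp_request(cpu);
	else
		printf("enqueue on CPU%d: no OPP request\n", cpu);
}

int main(void)
{
	enqueue(0, ENQUEUE_WAKEUP);	/* regular wakeup */
	enqueue(0, ENQUEUE_WAKEUP_NEW);	/* fork() -> wake_up_new_task() */
	enqueue(0, 0);			/* e.g. load balance */
	return 0;
}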

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Steve Muckle <smuckle@linaro.org>

---
 kernel/sched/core.c  | 2 +-
 kernel/sched/fair.c  | 9 +++------
 kernel/sched/sched.h | 1 +
 3 files changed, 5 insertions(+), 7 deletions(-)

-- 
2.4.10


Patch

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index aa3f978..4c8c353e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2402,7 +2402,7 @@  void wake_up_new_task(struct task_struct *p)
 #endif
 
 	rq = __task_rq_lock(p);
-	activate_task(rq, p, 0);
+	activate_task(rq, p, ENQUEUE_WAKEUP_NEW);
 	p->on_rq = TASK_ON_RQ_QUEUED;
 	trace_sched_wakeup_new(p);
 	check_preempt_curr(rq, p, WF_FORK);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 904188a..1bfbbb7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4224,7 +4224,8 @@  enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 {
 	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &p->se;
-	int task_new = !(flags & ENQUEUE_WAKEUP);
+	int task_new = flags & ENQUEUE_WAKEUP_NEW;
+	int task_wakeup = flags & ENQUEUE_WAKEUP;
 
 	for_each_sched_entity(se) {
 		if (se->on_rq)
@@ -4265,12 +4266,8 @@  enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		 * because we get here also during load balancing, but
 		 * in these cases it seems wise to trigger as single
 		 * request after load balancing is done.
-		 *
-		 * XXX: how about fork()? Do we need a special
-		 *      flag/something to tell if we are here after a
-		 *      fork() (wakeup_task_new)?
 		 */
-		if (!task_new)
+		if (task_new || task_wakeup)
 			update_capacity_of(cpu_of(rq));
 	}
 	hrtick_update(rq);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index a88dbec..ad82274 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1139,6 +1139,7 @@  extern const u32 sched_prio_to_wmult[40];
 #endif
 #define ENQUEUE_REPLENISH	0x08
 #define ENQUEUE_RESTORE	0x10
+#define ENQUEUE_WAKEUP_NEW	0x20
 
 #define DEQUEUE_SLEEP		0x01
 #define DEQUEUE_SAVE		0x02