diff mbox

[v5,2/2] sched/deadline: Check to push the task away after its affinity was changed

Message ID 1436374709-26703-2-git-send-email-xlpang@126.com
State New
Headers show

Commit Message

Xunlei Pang July 8, 2015, 4:58 p.m. UTC
From: Xunlei Pang <pang.xunlei@linaro.org>

(Sync up with the corresponding behaviour of RT)

An rq may be left DL-overloaded because of task affinity, so
when the affinity of any runnable deadline task is changed, we
should check whether balancing needs to be triggered; otherwise
real-time response may be delayed unnecessarily. Currently, the
DL global scheduler does nothing about this.

This patch modifies set_cpus_allowed_dl(): if the target task
is runnable but not running and not throttled, it tries to push
it away once it becomes migratable.

The patch also solves a problem with move_queued_task() as called
from set_cpus_allowed_ptr():
When a deadline task with a larger deadline value is migrated
because its current cpu isn't in the new affinity mask, after
move_queued_task() it misses the chance of being pushed away,
because check_preempt_curr() called by move_queued_task() doesn't
set the need-resched flag for tasks with larger deadline values.

Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org>
---
 kernel/sched/deadline.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

Patch

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 20772ee..9f4691f 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1706,11 +1706,12 @@  static void set_cpus_allowed_dl(struct task_struct *p,
 	weight = cpumask_weight(new_mask);
 
 	/*
-	 * Only update if the process changes its state from whether it
-	 * can migrate or not.
+	 * Skip updating the migration bookkeeping if the process doesn't
+	 * change its migrate state, but we still need to check whether it
+	 * can be pushed away due to its new affinity.
 	 */
 	if ((p->nr_cpus_allowed > 1) == (weight > 1))
-		return;
+		goto queue_push;
 
 	/*
 	 * The process used to be able to migrate OR it can now migrate
@@ -1727,6 +1728,13 @@  static void set_cpus_allowed_dl(struct task_struct *p,
 	}
 
 	update_dl_migration(&rq->dl);
+
+queue_push:
+	if (weight > 1 &&
+	    !task_running(rq, p) &&
+	    !test_tsk_need_resched(rq->curr) &&
+	    !cpumask_subset(new_mask, &p->cpus_allowed))
+		queue_push_tasks(rq);
 }
 
 /* Assumes rq->lock is held */