[v4,08/12] sched: move cfs task on a CPU with higher capacity

Message ID 1406569906-9763-9-git-send-email-vincent.guittot@linaro.org
State New

Commit Message

Vincent Guittot July 28, 2014, 5:51 p.m. UTC
If the CPU is used for handling a lot of IRQs, trigger a load balance to check
whether it's worth moving its tasks to another CPU that has more capacity.

As a side note, this will not generate more spurious ILB kicks, because we
already trigger an ILB if there is more than one busy CPU. If this CPU is the
only one that has a task, we will trigger the ILB once to migrate the task.

The nohz_kick_needed() function has been cleaned up a bit while adding the new
test.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 69 +++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 49 insertions(+), 20 deletions(-)
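
[Editor's note] For readers skimming the thread, here is a minimal standalone
sketch of the test the patch adds, using plain scalars in place of the kernel's
sched_group and sched_domain structures. The helper name and the example values
are illustrative only; imbalance_pct varies per domain, and 117 is merely a
typical value for domains whose CPUs share a cache:

#include <stdio.h>

/*
 * Sketch of the capacity check, not the kernel code itself:
 * capacity_orig is the CPU's full compute capacity, capacity is what
 * remains for CFS once RT and IRQ time are subtracted.  With
 * imbalance_pct = 117 the test fires once capacity drops below
 * capacity_orig * 100 / 117, i.e. once roughly 15% of the capacity
 * is consumed by something other than CFS tasks.
 */
static int capacity_significantly_reduced(unsigned long capacity_orig,
					  unsigned long capacity,
					  unsigned int imbalance_pct)
{
	return capacity_orig * 100 > capacity * imbalance_pct;
}

int main(void)
{
	/* 850 of 1024 left for CFS: ~17% consumed, so the check fires */
	printf("%d\n", capacity_significantly_reduced(1024, 850, 117));
	return 0;
}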

Comments

Rik van Riel July 28, 2014, 6:43 p.m. UTC | #1

On 07/28/2014 01:51 PM, Vincent Guittot wrote:
> If the CPU is used for handling a lot of IRQs, trigger a load balance
> to check whether it's worth moving its tasks to another CPU that has
> more capacity.
> 
> As a side note, this will not generate more spurious ILB kicks, because
> we already trigger an ILB if there is more than one busy CPU. If this
> CPU is the only one that has a task, we will trigger the ILB once to
> migrate the task.
> 
> The nohz_kick_needed() function has been cleaned up a bit while adding
> the new test.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/fair.c | 69 +++++++++++++++++++++++++++++++++++++----------------
>  1 file changed, 49 insertions(+), 20 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6843016..1cde8dd 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5969,6 +5969,14 @@ static bool update_sd_pick_busiest(struct lb_env *env,
>  			return true;
>  	}
> 
> +	/*
> +	 * The group capacity is probably reduced because of activity from other
> +	 * sched classes or interrupts which use part of the available capacity
> +	 */
> +	if ((sg->sgc->capacity_orig * 100) > (sgs->group_capacity *
> +				env->sd->imbalance_pct))
> +		return true;
> +
>  	return false;
>  }

Isn't this already reflected in avg_load, by having avg_load increase
due to the capacity decreasing when a cpu is busy with non-CFS loads?

Also, this part of update_sd_pick_busiest will not be reached in the
!SD_ASYM_PACKING case once my patch is applied, so this is a small
conflict between our series :)

-- 
All rights reversed
Vincent Guittot July 29, 2014, 7:40 a.m. UTC | #2
On 28 July 2014 20:43, Rik van Riel <riel@redhat.com> wrote:
> On 07/28/2014 01:51 PM, Vincent Guittot wrote:
>> If the CPU is used for handling a lot of IRQs, trigger a load balance
>> to check whether it's worth moving its tasks to another CPU that has
>> more capacity.
>>
>> As a side note, this will not generate more spurious ILB kicks, because
>> we already trigger an ILB if there is more than one busy CPU. If this
>> CPU is the only one that has a task, we will trigger the ILB once to
>> migrate the task.
>>
>> The nohz_kick_needed() function has been cleaned up a bit while adding
>> the new test.
>>
>> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
>> ---
>>  kernel/sched/fair.c | 69 +++++++++++++++++++++++++++++++++++++----------------
>>  1 file changed, 49 insertions(+), 20 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 6843016..1cde8dd 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -5969,6 +5969,14 @@ static bool update_sd_pick_busiest(struct lb_env *env,
>>  			return true;
>>  	}
>>
>> +	/*
>> +	 * The group capacity is probably reduced because of activity from other
>> +	 * sched classes or interrupts which use part of the available capacity
>> +	 */
>> +	if ((sg->sgc->capacity_orig * 100) > (sgs->group_capacity *
>> +				env->sd->imbalance_pct))
>> +		return true;
>> +
>>  	return false;
>>  }
>
> Isn't this already reflected in avg_load, by having avg_load increase
> due to the capacity decreasing when a cpu is busy with non-CFS loads?

Yes, avg_load should probably increase, but that doesn't mean the group
will be selected for load balancing.
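
[Editor's note] To make that concrete, a small worked example with illustrative
numbers (not taken from the patch), assuming avg_load is computed as group_load
scaled by SCHED_CAPACITY_SCALE over group_capacity, as update_sg_lb_stats()
does in this era:

#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024

int main(void)
{
	/* Illustrative numbers, not taken from the patch. */
	unsigned long group_load = 512;
	unsigned long group_capacity = 768;	/* ~25% consumed by IRQs */
	unsigned long avg_load;

	avg_load = group_load * SCHED_CAPACITY_SCALE / group_capacity;

	/*
	 * Prints avg_load = 682: higher than the 512 seen at full
	 * capacity (1024), yet a fully available group carrying
	 * group_load = 700 still reports avg_load = 700 and would
	 * look busier, so the reduced-capacity group is not picked.
	 */
	printf("avg_load = %lu\n", avg_load);
	return 0;
}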

>
> Also, this part of update_sd_pick_busiest will not be reached in the
> !SD_ASYM_PACKING case once my patch is applied, so this is a small
> conflict between our series :)

Yes, this will conflict with your change, but the conflict should be
obvious to resolve, as it only adds a new condition for picking the group.

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6843016..1cde8dd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5969,6 +5969,14 @@  static bool update_sd_pick_busiest(struct lb_env *env,
 			return true;
 	}
 
+	/*
+	 * The group capacity is probably reduced because of activity from other
+	 * sched classes or interrupts which use part of the available capacity
+	 */
+	if ((sg->sgc->capacity_orig * 100) > (sgs->group_capacity *
+				env->sd->imbalance_pct))
+		return true;
+
 	return false;
 }
 
@@ -6455,13 +6463,23 @@  static int need_active_balance(struct lb_env *env)
 	struct sched_domain *sd = env->sd;
 
 	if (env->idle == CPU_NEWLY_IDLE) {
+		int src_cpu = env->src_cpu;
 
 		/*
 		 * ASYM_PACKING needs to force migrate tasks from busy but
 		 * higher numbered CPUs in order to pack all tasks in the
 		 * lowest numbered CPUs.
 		 */
-		if ((sd->flags & SD_ASYM_PACKING) && env->src_cpu > env->dst_cpu)
+		if ((sd->flags & SD_ASYM_PACKING) && src_cpu > env->dst_cpu)
+			return 1;
+
+		/*
+		 * If the CPUs share their cache and the src_cpu's capacity is
+		 * reduced because of other sched classes or IRQs, we trigger
+		 * an active balance to move the task
+		 */
+		if ((capacity_orig_of(src_cpu) * 100) > (capacity_of(src_cpu) *
+				sd->imbalance_pct))
 			return 1;
 	}
 
@@ -6563,6 +6581,8 @@  static int load_balance(int this_cpu, struct rq *this_rq,
 
 	schedstat_add(sd, lb_imbalance[idle], env.imbalance);
 
+	env.src_cpu = busiest->cpu;
+
 	ld_moved = 0;
 	if (busiest->nr_running > 1) {
 		/*
@@ -6572,7 +6592,6 @@  static int load_balance(int this_cpu, struct rq *this_rq,
 		 * correctly treated as an imbalance.
 		 */
 		env.flags |= LBF_ALL_PINNED;
-		env.src_cpu   = busiest->cpu;
 		env.src_rq    = busiest;
 		env.loop_max  = min(sysctl_sched_nr_migrate, busiest->nr_running);
 
@@ -7262,10 +7281,12 @@  static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle)
 
 /*
  * Current heuristic for kicking the idle load balancer in the presence
- * of an idle cpu is the system.
+ * of an idle cpu in the system.
  *   - This rq has more than one task.
- *   - At any scheduler domain level, this cpu's scheduler group has multiple
- *     busy cpu's exceeding the group's capacity.
+ *   - This rq has at least one CFS task and the capacity of the CPU is
+ *     significantly reduced because of RT tasks or IRQs.
+ *   - At the parent of the LLC scheduler domain level, this cpu's scheduler
+ *     group has multiple busy cpus.
  *   - For SD_ASYM_PACKING, if the lower numbered cpu's in the scheduler
  *     domain span are idle.
  */
@@ -7275,9 +7296,10 @@  static inline int nohz_kick_needed(struct rq *rq)
 	struct sched_domain *sd;
 	struct sched_group_capacity *sgc;
 	int nr_busy, cpu = rq->cpu;
+	bool kick = false;
 
 	if (unlikely(rq->idle_balance))
-		return 0;
+		return false;
 
        /*
 	* We may be recently in ticked or tickless idle mode. At the first
@@ -7291,38 +7313,45 @@  static inline int nohz_kick_needed(struct rq *rq)
 	 * balancing.
 	 */
 	if (likely(!atomic_read(&nohz.nr_cpus)))
-		return 0;
+		return false;
 
 	if (time_before(now, nohz.next_balance))
-		return 0;
+		return false;
 
 	if (rq->nr_running >= 2)
-		goto need_kick;
+		return true;
 
 	rcu_read_lock();
 	sd = rcu_dereference(per_cpu(sd_busy, cpu));
-
 	if (sd) {
 		sgc = sd->groups->sgc;
 		nr_busy = atomic_read(&sgc->nr_busy_cpus);
 
-		if (nr_busy > 1)
-			goto need_kick_unlock;
+		if (nr_busy > 1) {
+			kick = true;
+			goto unlock;
+		}
+
 	}
 
-	sd = rcu_dereference(per_cpu(sd_asym, cpu));
+	sd = rcu_dereference(rq->sd);
+	if (sd) {
+		if ((rq->cfs.h_nr_running >= 1) &&
+				((rq->cpu_capacity * sd->imbalance_pct) <
+				(rq->cpu_capacity_orig * 100))) {
+			kick = true;
+			goto unlock;
+		}
+	}
 
+	sd = rcu_dereference(per_cpu(sd_asym, cpu));
 	if (sd && (cpumask_first_and(nohz.idle_cpus_mask,
 				  sched_domain_span(sd)) < cpu))
-		goto need_kick_unlock;
+		kick = true;
 
+unlock:
 	rcu_read_unlock();
-	return 0;
-
-need_kick_unlock:
-	rcu_read_unlock();
-need_kick:
-	return 1;
+	return kick;
 }
 #else
 static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle) { }