
[RFC] sched: find the latest idle cpu

Message ID 52D61448.3070108@linaro.org
State New

Commit Message

Alex Shi Jan. 15, 2014, 4:53 a.m. UTC
On 01/15/2014 12:48 PM, Alex Shi wrote:
> On 01/15/2014 12:31 PM, Michael wang wrote:
>> Hi, Alex
>>
>> On 01/15/2014 12:07 PM, Alex Shi wrote:
>> [snip]
>>> 		}
>>> +#ifdef CONFIG_NO_HZ_COMMON
>>> +		/*
>>> +		 * Coarsely to get the latest idle cpu for shorter latency and
>>> +		 * possible power benefit.
>>> +		 */
>>> +		if (!min_load) {
> 
> here it should be !load.
>>> +			struct tick_sched *ts = &per_cpu(tick_cpu_sched, i);
>>> +
>>> +			s64 latest_wake = 0;
>>
>> I guess we missed some code for latest_wake here?
> 
> Yes, thanks for the reminder!
> 
> so updated patch:
> 

Oops, still incorrect. Re-updated:

Patch

===

From 5d48303b3eb3b5ca7fde54a6dfcab79cff360403 Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@linaro.org>
Date: Tue, 14 Jan 2014 23:07:42 +0800
Subject: [PATCH] sched: find the latest idle cpu

Currently we just try to find the least loaded cpu. If several cpus
are idle, we simply pick the first one in the cpu mask.

In fact we can pick the idle cpu that is handling an interrupt, or
the one that went idle most recently, and gain on both latency and
power. The selected cpu may not be the best choice, since another
cpu may be interrupted while we are selecting, but being that
scrupulous would cost too much.

Signed-off-by: Alex Shi <alex.shi@linaro.org>
---
 kernel/sched/fair.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c7395d9..e2c4cd9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4161,12 +4161,38 @@  find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 
+	s64 latest_wake = 0;
+
 	/* Traverse only the allowed CPUs */
 	for_each_cpu_and(i, sched_group_cpus(group), tsk_cpus_allowed(p)) {
 		load = weighted_cpuload(i);
 
 		if (load < min_load || (load == min_load && i == this_cpu)) {
 			min_load = load;
 			idlest = i;
 		}
+#ifdef CONFIG_NO_HZ_COMMON
+		/*
+		 * Coarsely pick the latest idle cpu, for shorter latency and
+		 * a possible power benefit.
+		 */
+		if (!load) {
+			struct tick_sched *ts = &per_cpu(tick_cpu_sched, i);
+
+			/* the idle cpu is handling an irq */
+			if (ts->inidle && !ts->idle_active)
+				idlest = i;
+			/* the cpu has left idle to resched */
+			else if (!ts->inidle)
+				idlest = i;
+			/* otherwise, find the most recently idled cpu */
+			else {
+				s64 temp = ktime_to_us(ts->idle_entrytime);
+				if (temp > latest_wake) {
+					latest_wake = temp;
+					idlest = i;
+				}
+			}
+		}
+#endif
 	}
 
 	return idlest;
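
For readers without a kernel tree at hand, here is a minimal,
compilable userspace model of the selection policy above. struct
cpu_stat and the sample values are hypothetical stand-ins for the
kernel's per-cpu tick_sched state (inidle, idle_active,
idle_entrytime); only the decision logic mirrors the patch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct cpu_stat {
	unsigned long load;        /* stand-in for weighted_cpuload() */
	bool inidle;               /* cpu is inside the idle loop */
	bool idle_active;          /* idle time accounting is running */
	int64_t idle_entrytime_us; /* when the cpu last entered idle */
};

static int find_idlest(const struct cpu_stat *cs, int ncpus, int this_cpu)
{
	unsigned long min_load = (unsigned long)-1;
	int64_t latest_wake = 0; /* persists across the whole scan */
	int idlest = -1;
	int i;

	for (i = 0; i < ncpus; i++) {
		unsigned long load = cs[i].load;

		if (load < min_load || (load == min_load && i == this_cpu)) {
			min_load = load;
			idlest = i;
		}
		if (!load) {
			if (cs[i].inidle && !cs[i].idle_active)
				idlest = i; /* idle cpu handling an irq */
			else if (!cs[i].inidle)
				idlest = i; /* cpu left idle to resched */
			else if (cs[i].idle_entrytime_us > latest_wake) {
				latest_wake = cs[i].idle_entrytime_us;
				idlest = i; /* most recently idled cpu */
			}
		}
	}
	return idlest;
}

int main(void)
{
	/* cpu0 is busy; cpu1 idled at t=100us; cpu2 idled later, at t=900us */
	struct cpu_stat cs[] = {
		{ .load = 512 },
		{ .load = 0, .inidle = true, .idle_active = true,
		  .idle_entrytime_us = 100 },
		{ .load = 0, .inidle = true, .idle_active = true,
		  .idle_entrytime_us = 900 },
	};

	printf("idlest: cpu%d\n", find_idlest(cs, 3, 0)); /* prints cpu2 */
	return 0;
}

Note that latest_wake must persist across the whole scan: were it
reset on every iteration, the idle_entrytime comparison would always
succeed and the policy would degenerate to picking whichever idle cpu
happens to come last in mask order.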