[6/7] sched: Make sched entity usage tracking scale-invariant

Message ID: 1411403047-32010-7-git-send-email-morten.rasmussen@arm.com
State: New

Commit Message

Morten Rasmussen Sept. 22, 2014, 4:24 p.m. UTC
Apply scale-invariance correction factor to usage tracking as well.

cc: Paul Turner <pjt@google.com>
cc: Ben Segall <bsegall@google.com>

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
---
 kernel/sched/fair.c |   28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)
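
The correction itself is a fixed-point multiply-and-shift: busy time is
weighted by the cpu's current capacity before being accumulated. Below is a
minimal standalone sketch of that arithmetic, assuming SCHED_CAPACITY_SHIFT
== 10 (so a capacity of 1024 means full speed) and using a made-up
example_capacity() helper in place of arch_scale_load_capacity():

#include <stdint.h>
#include <stdio.h>

#define SCHED_CAPACITY_SHIFT	10	/* capacity of 1024 == full speed */

/* Made-up stand-in for arch_scale_load_capacity(cpu). */
static uint32_t example_capacity(int cpu)
{
	(void)cpu;
	return 512;	/* pretend this cpu currently runs at half speed */
}

int main(void)
{
	uint32_t delta = 1024;	/* ~1ms of busy time, in us */
	uint32_t scaled = (delta * example_capacity(0)) >> SCHED_CAPACITY_SHIFT;

	/* 1024us of wall-clock runtime counts as only 512us of work. */
	printf("delta=%u scaled=%u\n", delta, scaled);
	return 0;
}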

Comments

Ben Segall Sept. 22, 2014, 5:13 p.m. UTC | #1
Morten Rasmussen <morten.rasmussen@arm.com> writes:

> Apply scale-invariance correction factor to usage tracking as well.

It seems like it would make more sense to order the patches so that the
usage tracking comes first and all of the scale-invariance changes follow
together, or perhaps to just fold this into the usage tracking patch.
Morten Rasmussen Sept. 23, 2014, 1:35 p.m. UTC | #2
On Mon, Sep 22, 2014 at 06:13:46PM +0100, bsegall@google.com wrote:
> Morten Rasmussen <morten.rasmussen@arm.com> writes:
> 
> > Apply scale-invariance correction factor to usage tracking as well.
> 
> It seems like it would make more sense to order the patches so that the
> usage tracking comes first and all of the scale-invariance changes follow
> together, or perhaps to just fold this into the usage tracking patch.

Makes sense. I don't mind reordering the patches. Vincent has already
got some of the usage bits in his patch set, so I will have to rework
the usage patches anyway if Peter decides to take the rest of Vincent's
patch set.

Morten
Peter Zijlstra Oct. 2, 2014, 9:04 p.m. UTC | #3
On Tue, Sep 23, 2014 at 02:35:03PM +0100, Morten Rasmussen wrote:
> On Mon, Sep 22, 2014 at 06:13:46PM +0100, bsegall@google.com wrote:
> > Morten Rasmussen <morten.rasmussen@arm.com> writes:
> > 
> > > Apply scale-invariance correction factor to usage tracking as well.
> > 
> > It seems like it would make more sense to order the patches so that the
> > usage tracking comes first and all of the scale-invariance changes follow
> > together, or perhaps to just fold this into the usage tracking patch.
> 
> Makes sense. I don't mind reordering the patches. Vincent has already
> got some of the usage bits in his patch set, so I will have to rework
> the usage patches anyway if Peter decides to take the rest of Vincent's
> patch set.

Yes, please reorder. I'll try and get back to wrapping my brain around
the rest of Vincent's patches. I had to put that on hold to avoid
getting buried under incoming bits.

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d8a8c83..c7aa8c1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2302,9 +2302,9 @@ static __always_inline int __update_entity_runnable_avg(u64 now, int cpu,
 							int runnable,
 							int running)
 {
-	u64 delta, periods;
-	u32 runnable_contrib;
-	int delta_w, decayed = 0;
+	u64 delta, scaled_delta, periods;
+	u32 runnable_contrib, scaled_runnable_contrib;
+	int delta_w, scaled_delta_w, decayed = 0;
 	u32 scale_cap = arch_scale_load_capacity(cpu);
 
 	delta = now - sa->last_runnable_update;
@@ -2339,11 +2339,12 @@ static __always_inline int __update_entity_runnable_avg(u64 now, int cpu,
 		 */
 		delta_w = 1024 - delta_w;
 
+		scaled_delta_w = (delta_w * scale_cap) >> SCHED_CAPACITY_SHIFT;
+
 		if (runnable)
-			sa->runnable_avg_sum += (delta_w * scale_cap)
-					>> SCHED_CAPACITY_SHIFT;
+			sa->runnable_avg_sum += scaled_delta_w;
 		if (running)
-			sa->usage_avg_sum += delta_w;
+			sa->usage_avg_sum += scaled_delta_w;
 		sa->runnable_avg_period += delta_w;
 
 		delta -= delta_w;
@@ -2361,20 +2362,23 @@ static __always_inline int __update_entity_runnable_avg(u64 now, int cpu,
 		/* Efficiently calculate \sum (1..n_period) 1024*y^i */
 		runnable_contrib = __compute_runnable_contrib(periods);
 
-		if (runnable)
-			sa->runnable_avg_sum += (runnable_contrib * scale_cap)
+		scaled_runnable_contrib = (runnable_contrib * scale_cap)
 						>> SCHED_CAPACITY_SHIFT;
+
+		if (runnable)
+			sa->runnable_avg_sum += scaled_runnable_contrib;
 		if (running)
-			sa->usage_avg_sum += runnable_contrib;
+			sa->usage_avg_sum += scaled_runnable_contrib;
 		sa->runnable_avg_period += runnable_contrib;
 	}
 
 	/* Remainder of delta accrued against u_0` */
+	scaled_delta = (delta * scale_cap) >> SCHED_CAPACITY_SHIFT;
+
 	if (runnable)
-		sa->runnable_avg_sum += (delta * scale_cap)
-				>> SCHED_CAPACITY_SHIFT;
+		sa->runnable_avg_sum += scaled_delta;
 	if (running)
-		sa->usage_avg_sum += delta;
+		sa->usage_avg_sum += scaled_delta;
 	sa->runnable_avg_period += delta;
 
 	return decayed;
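
For reference, here is a much-simplified userspace model of the function
above, showing how usage_avg_sum now accrues the same capacity-scaled time
as runnable_avg_sum, while runnable_avg_period keeps accruing unscaled
wall-clock time. The period splitting and geometric decay (decay_load(),
__compute_runnable_contrib()) are deliberately left out, struct
sched_avg_model and update_avg_model() are made-up names, and the capacity
value is a fixed assumption rather than arch_scale_load_capacity():

#include <stdint.h>
#include <stdio.h>

#define SCHED_CAPACITY_SHIFT	10

/* Made-up container mirroring the sched_avg fields touched above. */
struct sched_avg_model {
	uint64_t last_runnable_update;	/* ns */
	uint32_t runnable_avg_sum;
	uint32_t usage_avg_sum;
	uint32_t runnable_avg_period;
};

/*
 * Simplified model of __update_entity_runnable_avg() after this patch:
 * runnable_avg_sum and usage_avg_sum both accrue capacity-scaled time,
 * while runnable_avg_period keeps accruing unscaled wall-clock time.
 */
static void update_avg_model(struct sched_avg_model *sa, uint64_t now,
			     uint32_t scale_cap, int runnable, int running)
{
	uint64_t delta = now - sa->last_runnable_update;
	uint64_t scaled_delta;

	delta >>= 10;			/* ns -> ~us, as in the real code */
	if (!delta)
		return;
	sa->last_runnable_update = now;

	scaled_delta = (delta * scale_cap) >> SCHED_CAPACITY_SHIFT;

	if (runnable)
		sa->runnable_avg_sum += scaled_delta;
	if (running)
		sa->usage_avg_sum += scaled_delta;
	sa->runnable_avg_period += delta;
}

int main(void)
{
	struct sched_avg_model sa = { 0 };

	/* 2ms runnable and running on a cpu at half capacity (512/1024). */
	update_avg_model(&sa, 2000000ULL, 512, 1, 1);

	printf("runnable_avg_sum=%u usage_avg_sum=%u runnable_avg_period=%u\n",
	       sa.runnable_avg_sum, sa.usage_avg_sum, sa.runnable_avg_period);
	return 0;
}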