
[1/2] sched: Optimize build_sched_domains() for saving first SD node for a cpu

Message ID CAKohpokGrgFOtOT6Y3e4MOJBUQWT4FG4Mk-F6_M0V-pEEf3KYw@mail.gmail.com
State Accepted

Commit Message

Viresh Kumar June 5, 2013, 5:07 a.m. UTC
On 5 June 2013 10:12, Michael Wang <wangyun@linux.vnet.ibm.com> wrote:
> Hi, Viresh
>
> On 06/04/2013 07:20 PM, Viresh Kumar wrote:
> [snip]
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 58453b8..638f6cb 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -6533,16 +6533,13 @@ static int build_sched_domains(const struct cpumask *cpu_map,
>>               sd = NULL;
>>               for (tl = sched_domain_topology; tl->init; tl++) {
>>                       sd = build_sched_domain(tl, &d, cpu_map, attr, sd, i);
>> +                     if (!*per_cpu_ptr(d.sd, i))
>
> What about:
>                         if (tl == sched_domain_topology)
>
> It costs less than per_cpu_ptr(), doesn't it?

How could I miss it... Obviously it's better :)

See if the version below looks better (attached too, in case gmail screws up
my mail).

--------x-------------x------------------

From: Viresh Kumar <viresh.kumar@linaro.org>
Date: Tue, 4 Jun 2013 15:41:15 +0530
Subject: [PATCH] sched: Optimize build_sched_domains() for saving first SD
 node for a cpu

We are saving the first scheduling domain for a cpu in build_sched_domains()
by iterating over the nested sd->child list. We don't actually need to do it
this way.

tl will be equal to sched_domain_topology for the first iteration, so we can
set *per_cpu_ptr(d.sd, i) based on that. Save the pointer to the first SD
while running the iteration loop over tl's instead.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 kernel/sched/core.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

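For illustration, a minimal standalone sketch of why this is equivalent
(all types, names and helpers below are simplified stand-ins, not the
kernel's real ones): build_sched_domain() links each newly built level
above the previous one, so the domain built on the first topology level
is exactly the leaf that the old sd->child walk used to find.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

struct sched_domain {
	struct sched_domain *child;	/* next lower (smaller) domain */
	const char *name;
};

struct topology_level {
	const char *name;		/* NULL name terminates the table */
};

/* Stand-in for build_sched_domain(): allocate a domain one level above
 * 'child' and link the old top below the new one, as the kernel does. */
static struct sched_domain *build_one(const struct topology_level *tl,
				      struct sched_domain *child)
{
	struct sched_domain *sd = calloc(1, sizeof(*sd));

	sd->child = child;
	sd->name = tl->name;
	return sd;
}

int main(void)
{
	/* Made-up topology table, lowest level first, like the kernel's. */
	static const struct topology_level topology[] = {
		{ "SMT" }, { "MC" }, { "CPU" }, { NULL },
	};
	const struct topology_level *tl;
	struct sched_domain *sd = NULL, *first_sd = NULL;

	for (tl = topology; tl->name; tl++) {
		sd = build_one(tl, sd);
		if (tl == topology)	/* first iteration: the leaf */
			first_sd = sd;
	}

	/* The old approach: walk down from the topmost domain. */
	while (sd->child)
		sd = sd->child;

	assert(sd == first_sd);		/* both ways reach the same SD */
	printf("leaf domain: %s\n", first_sd->name);	/* prints "SMT" */
	return 0;
}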

Comments

Michael Wang June 5, 2013, 5:26 a.m. UTC | #1
On 06/05/2013 01:07 PM, Viresh Kumar wrote:
> On 5 June 2013 10:12, Michael Wang <wangyun@linux.vnet.ibm.com> wrote:
>> Hi, Viresh
>>
>> On 06/04/2013 07:20 PM, Viresh Kumar wrote:
>> [snip]
>>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>>> index 58453b8..638f6cb 100644
>>> --- a/kernel/sched/core.c
>>> +++ b/kernel/sched/core.c
>>> @@ -6533,16 +6533,13 @@ static int build_sched_domains(const struct cpumask *cpu_map,
>>>               sd = NULL;
>>>               for (tl = sched_domain_topology; tl->init; tl++) {
>>>                       sd = build_sched_domain(tl, &d, cpu_map, attr, sd, i);
>>> +                     if (!*per_cpu_ptr(d.sd, i))
>>
>> What about:
>>                         if (tl == sched_domain_topology)
>>
>> It costs less than per_cpu_ptr(), doesn't it?
> 
> How could I miss it... Obviously it's better :)
> 
> See if the version below looks better (attached too, in case gmail screws up
> my mail).

Looks good to me :)

Regards,
Michael Wang

> 
> --------x-------------x------------------
> 
> From: Viresh Kumar <viresh.kumar@linaro.org>
> Date: Tue, 4 Jun 2013 15:41:15 +0530
> Subject: [PATCH] sched: Optimize build_sched_domains() for saving first SD
>  node for a cpu
> 
> We are saving the first scheduling domain for a cpu in build_sched_domains()
> by iterating over the nested sd->child list. We don't actually need to do it
> this way.
> 
> tl will be equal to sched_domain_topology for the first iteration, so we can
> set *per_cpu_ptr(d.sd, i) based on that. Save the pointer to the first SD
> while running the iteration loop over tl's instead.
> 
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
>  kernel/sched/core.c | 7 ++-----
>  1 file changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 58453b8..08a27be 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6533,16 +6533,13 @@ static int build_sched_domains(const struct
> cpumask *cpu_map,
>  		sd = NULL;
>  		for (tl = sched_domain_topology; tl->init; tl++) {
>  			sd = build_sched_domain(tl, &d, cpu_map, attr, sd, i);
> +			if (tl == sched_domain_topology)
> +				*per_cpu_ptr(d.sd, i) = sd;
>  			if (tl->flags & SDTL_OVERLAP || sched_feat(FORCE_SD_OVERLAP))
>  				sd->flags |= SD_OVERLAP;
>  			if (cpumask_equal(cpu_map, sched_domain_span(sd)))
>  				break;
>  		}
> -
> -		while (sd->child)
> -			sd = sd->child;
> -
> -		*per_cpu_ptr(d.sd, i) = sd;
>  	}
> 
>  	/* Build the groups for the domains */
>
Peter Zijlstra June 5, 2013, 11:04 a.m. UTC | #2
On Wed, Jun 05, 2013 at 10:37:29AM +0530, Viresh Kumar wrote:
> On 5 June 2013 10:12, Michael Wang <wangyun@linux.vnet.ibm.com> wrote:
> > Hi, Viresh
> >
> > On 06/04/2013 07:20 PM, Viresh Kumar wrote:
> > [snip]
> >> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> >> index 58453b8..638f6cb 100644
> >> --- a/kernel/sched/core.c
> >> +++ b/kernel/sched/core.c
> >> @@ -6533,16 +6533,13 @@ static int build_sched_domains(const struct cpumask *cpu_map,
> >>               sd = NULL;
> >>               for (tl = sched_domain_topology; tl->init; tl++) {
> >>                       sd = build_sched_domain(tl, &d, cpu_map, attr, sd, i);
> >> +                     if (!*per_cpu_ptr(d.sd, i))
> >
> > What about:
> >                         if (tl == sched_domain_topology)
> >
> > It costs less than per_cpu_ptr(), doesn't it?
> 
> How could I miss it... Obviously it's better :)
> 
> See if the version below looks better (attached too, in case gmail screws up
> my mail).
> 
> --------x-------------x------------------
> 
> From: Viresh Kumar <viresh.kumar@linaro.org>
> Date: Tue, 4 Jun 2013 15:41:15 +0530
> Subject: [PATCH] sched: Optimize build_sched_domains() for saving first SD
>  node for a cpu
> 
> We are saving the first scheduling domain for a cpu in build_sched_domains()
> by iterating over the nested sd->child list. We don't actually need to do it
> this way.
> 
> tl will be equal to sched_domain_topology for the first iteration, so we can
> set *per_cpu_ptr(d.sd, i) based on that. Save the pointer to the first SD
> while running the iteration loop over tl's instead.
> 
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
>  kernel/sched/core.c | 7 ++-----
>  1 file changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 58453b8..08a27be 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6533,16 +6533,13 @@ static int build_sched_domains(const struct
> cpumask *cpu_map,

Applying patch
patches/viresh_kumar-_patch__sched-optimize_build_sched_domains_for_saving_first_sd.patch
patching file kernel/sched/core.c
patch: **** malformed patch at line 56: cpumask *cpu_map,
Viresh Kumar June 5, 2013, 12:42 p.m. UTC | #3
On 5 June 2013 16:34, Peter Zijlstra <peterz@infradead.org> wrote:
> On Wed, Jun 05, 2013 at 10:37:29AM +0530, Viresh Kumar wrote:

>> See if the version below looks better (attached too, in case gmail screws up
>> my mail).

> Applying patch
> patches/viresh_kumar-_patch__sched-optimize_build_sched_domains_for_saving_first_sd.patch
> patching file kernel/sched/core.c
> patch: **** malformed patch at line 56: cpumask *cpu_map,

Did you try to apply the patch from the mail or from the attachment? I asked
you to pick the attachment, as gmail's copy-paste screws up patches.

I rebased it over tip/master now:

2bf6874 Merge branch 'x86/cleanups'

Patches are attached now.
Peter Zijlstra June 5, 2013, 1:03 p.m. UTC | #4
On Wed, Jun 05, 2013 at 06:12:15PM +0530, Viresh Kumar wrote:
> On 5 June 2013 16:34, Peter Zijlstra <peterz@infradead.org> wrote:
> > On Wed, Jun 05, 2013 at 10:37:29AM +0530, Viresh Kumar wrote:
> 
> >> See if the version below looks better (attached too, in case gmail screws up
> >> my mail).
> 
> > Applying patch
> > patches/viresh_kumar-_patch__sched-optimize_build_sched_domains_for_saving_first_sd.patch
> > patching file kernel/sched/core.c
> > patch: **** malformed patch at line 56: cpumask *cpu_map,
> 
> Did you try to apply the patch from the mail or from the attachment? I asked
> you to pick the attachment, as gmail's copy-paste screws up patches.
> 

From email; that's what my mailer is scripted for. TBH I didn't even
notice the attachment.

I'll go prod at the attachment.

Patch

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 58453b8..08a27be 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6533,16 +6533,13 @@ static int build_sched_domains(const struct cpumask *cpu_map,
 		sd = NULL;
 		for (tl = sched_domain_topology; tl->init; tl++) {
 			sd = build_sched_domain(tl, &d, cpu_map, attr, sd, i);
+			if (tl == sched_domain_topology)
+				*per_cpu_ptr(d.sd, i) = sd;
 			if (tl->flags & SDTL_OVERLAP || sched_feat(FORCE_SD_OVERLAP))
 				sd->flags |= SD_OVERLAP;
 			if (cpumask_equal(cpu_map, sched_domain_span(sd)))
 				break;
 		}
-
-		while (sd->child)
-			sd = sd->child;
-
-		*per_cpu_ptr(d.sd, i) = sd;
 	}
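
A note on why the accepted test is the cheaper one: tl == sched_domain_topology
compares two pointer values that are already at hand, while the earlier
!*per_cpu_ptr(d.sd, i) variant has to compute a per-cpu address and load
through it on every iteration of the loop. A user-space toy of the two idioms
(every name below is a simplified stand-in, not the kernel's real per-cpu
machinery):

#include <assert.h>

#define NR_CPUS 4

struct topology_level { int valid; };	/* 0 terminates the table */

static const struct topology_level topology[] = {
	{ 1 }, { 1 }, { 1 }, { 0 },
};

/* Stand-in for a per-cpu pointer: a plain array indexed by cpu. The real
 * per_cpu_ptr() adds a per-cpu offset to a base address, so testing what
 * it points at costs address arithmetic plus a memory load. */
static void *saved_sd[NR_CPUS];

static void **per_cpu_slot(int cpu)
{
	return &saved_sd[cpu];
}

int main(void)
{
	const struct topology_level *tl;
	int cpu = 2, domain, hits_compare = 0, hits_deref = 0;

	for (tl = topology; tl->valid; tl++) {
		void *sd = &domain;	/* pretend we just built a domain */

		/* v2 (this patch): pure pointer compare against the base. */
		if (tl == topology)
			hits_compare++;

		/* v1 (earlier posting): dereference the slot, test for NULL. */
		if (!*per_cpu_slot(cpu)) {
			*per_cpu_slot(cpu) = sd;
			hits_deref++;
		}
	}

	/* Both idioms fire exactly once, on the first topology level. */
	assert(hits_compare == 1 && hits_deref == 1);
	return 0;
}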