[v9,2/8] sched/core: Streamlining calls to task_rq_unlock()

Message ID 20190719140000.31694-3-juri.lelli@redhat.com
State New
Series [v9,1/8] sched/topology: Adding function partition_sched_domains_locked()

Commit Message

Juri Lelli July 19, 2019, 1:59 p.m. UTC
From: Mathieu Poirier <mathieu.poirier@linaro.org>

Calls to task_rq_unlock() are done several times in function
__sched_setscheduler().  This is fine when only the rq lock needs to be
handled but not so much when other locks come into play.

This patch streamlines the release of the rq lock so that only one
location needs to be modified when dealing with more than one lock.

No change of functionality is introduced by this patch.

Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: Tejun Heo <tj@kernel.org>
---
 kernel/sched/core.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

-- 
2.17.2
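
The idea is easiest to see outside the kernel tree. What follows is a
minimal, self-contained userspace sketch of the same single-exit pattern,
with a pthread mutex standing in for the rq lock; the names demo_lock,
demo_value and demo_set_value are made up for the example and nothing
below is kernel code. Every early-return path sets retval and jumps to
one unlock label, so a later change that has to drop a second lock only
touches that one spot.

/*
 * Minimal userspace sketch of the pattern the patch adopts: every
 * early-exit path sets retval and jumps to a single "unlock" label,
 * so the lock is released in exactly one place.  All names here
 * (demo_lock, demo_value, demo_set_value) are illustrative only.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;
static int demo_value;

static int demo_set_value(int v)
{
	int retval;

	pthread_mutex_lock(&demo_lock);

	if (v < 0) {
		/* Instead of unlocking and returning here... */
		retval = -EINVAL;
		goto unlock;
	}

	if (v == demo_value) {
		/* ...or here, every exit funnels through one label. */
		retval = 0;
		goto unlock;
	}

	demo_value = v;
	retval = 0;

unlock:
	/*
	 * The only place the lock is dropped; taking a second lock later
	 * means touching just this spot.
	 */
	pthread_mutex_unlock(&demo_lock);
	return retval;
}

int main(void)
{
	printf("set 5  -> %d\n", demo_set_value(5));
	printf("set -1 -> %d\n", demo_set_value(-1));
	return 0;
}

It builds with gcc -pthread; only the control flow matters here, not the
workload.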

Comments

Dietmar Eggemann July 22, 2019, 8:21 a.m. UTC | #1
On 7/19/19 3:59 PM, Juri Lelli wrote:
> From: Mathieu Poirier <mathieu.poirier@linaro.org>

[...]

> @@ -4269,8 +4269,8 @@ static int __sched_setscheduler(struct task_struct *p,
>  			 */
>  			if (!cpumask_subset(span, &p->cpus_allowed) ||

This doesn't apply cleanly on v5.3-rc1 anymore due to commit
3bd3706251ee ("sched/core: Provide a pointer to the valid CPU mask").

>  			    rq->rd->dl_bw.bw == 0) {
> -				task_rq_unlock(rq, p, &rf);
> -				return -EPERM;
> +				retval = -EPERM;
> +				goto unlock;
>  			}
>  		}
>  #endif

[...]
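
(For reference, the conflict is only the rename of the per-task affinity
mask accessor: commit 3bd3706251ee replaced direct uses of
&p->cpus_allowed with the p->cpus_ptr pointer, so on v5.3-rc1 the touched
check would read roughly as below. This is a sketch of the rebased
context, not a line taken from the posted series.)

			if (!cpumask_subset(span, p->cpus_ptr) ||
			    rq->rd->dl_bw.bw == 0) {
				retval = -EPERM;
				goto unlock;
			}
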
Juri Lelli July 22, 2019, 8:32 a.m. UTC | #2
On 22/07/19 10:21, Dietmar Eggemann wrote:
> On 7/19/19 3:59 PM, Juri Lelli wrote:
> > From: Mathieu Poirier <mathieu.poirier@linaro.org>
> 
> [...]
> 
> > @@ -4269,8 +4269,8 @@ static int __sched_setscheduler(struct task_struct *p,
> >  			 */
> >  			if (!cpumask_subset(span, &p->cpus_allowed) ||
> 
> This doesn't apply cleanly on v5.3-rc1 anymore due to commit
> 3bd3706251ee ("sched/core: Provide a pointer to the valid CPU mask").
> 
> >  			    rq->rd->dl_bw.bw == 0) {
> > -				task_rq_unlock(rq, p, &rf);
> > -				return -EPERM;
> > +				retval = -EPERM;
> > +				goto unlock;
> >  			}
> >  		}
> >  #endif

Thanks for reporting. The set is based on cgroup/for-next (as of last
week), though. I can of course rebase on tip/sched/core or mainline if
needed.

Best,

Juri
Dietmar Eggemann July 22, 2019, 9:08 a.m. UTC | #3
On 7/22/19 10:32 AM, Juri Lelli wrote:
> On 22/07/19 10:21, Dietmar Eggemann wrote:
>> On 7/19/19 3:59 PM, Juri Lelli wrote:
>>> From: Mathieu Poirier <mathieu.poirier@linaro.org>
>>
>> [...]
>>
>>> @@ -4269,8 +4269,8 @@ static int __sched_setscheduler(struct task_struct *p,
>>>  			 */
>>>  			if (!cpumask_subset(span, &p->cpus_allowed) ||
>>
>> This doesn't apply cleanly on v5.3-rc1 anymore due to commit
>> 3bd3706251ee ("sched/core: Provide a pointer to the valid CPU mask").
>>
>>>  			    rq->rd->dl_bw.bw == 0) {
>>> -				task_rq_unlock(rq, p, &rf);
>>> -				return -EPERM;
>>> +				retval = -EPERM;
>>> +				goto unlock;
>>>  			}
>>>  		}
>>>  #endif
> 
> Thanks for reporting. The set is based on cgroup/for-next (as of last
> week), though. I can of course rebase on tip/sched/core or mainline if
> needed.

Not sure, there is another little issue on 3/8 since uclamp is in
v5.3-rc1 as well commit 69842cba9ace8 ("sched/uclamp: Add CPU's clamp
buckets refcounting").
Peter Zijlstra July 23, 2019, 10:31 a.m. UTC | #4
On Mon, Jul 22, 2019 at 10:32:14AM +0200, Juri Lelli wrote:

> Thanks for reporting. The set is based on cgroup/for-next (as of last
> week), though. I can of course rebase on tip/sched/core or mainline if
> needed.

TJ; I would like to take these patches through the scheduler tree if you
don't mind. Afaict there's no real conflict vs cgroup/for-next (I
applied the patches and then did a pull of cgroup/for-next which
finished without complaints).
Peter Zijlstra July 23, 2019, 10:32 a.m. UTC | #5
On Mon, Jul 22, 2019 at 11:08:17AM +0200, Dietmar Eggemann wrote:

> Not sure, there is another little issue on 3/8 since uclamp is in
> v5.3-rc1 as well commit 69842cba9ace8 ("sched/uclamp: Add CPU's clamp
> buckets refcounting").

Also, 8/8, but all conflicts are trivial and I've fixed them up.
Tejun Heo July 23, 2019, 1:11 p.m. UTC | #6
On Tue, Jul 23, 2019 at 12:31:31PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 22, 2019 at 10:32:14AM +0200, Juri Lelli wrote:
> 
> > Thanks for reporting. The set is based on cgroup/for-next (as of last
> > week), though. I can of course rebase on tip/sched/core or mainline if
> > needed.
> 
> TJ; I would like to take these patches through the scheduler tree if you
> don't mind. Afaict there's no real conflict vs cgroup/for-next (I
> applied the patches and then did a pull of cgroup/for-next which
> finished without complaints).

Yeah, for sure, please go ahead.

Thanks.

-- 
tejun
Juri Lelli July 23, 2019, 2:58 p.m. UTC | #7
On 23/07/19 06:11, Tejun Heo wrote:
> On Tue, Jul 23, 2019 at 12:31:31PM +0200, Peter Zijlstra wrote:
> > On Mon, Jul 22, 2019 at 10:32:14AM +0200, Juri Lelli wrote:
> > 
> > > Thanks for reporting. The set is based on cgroup/for-next (as of last
> > > week), though. I can of course rebase on tip/sched/core or mainline if
> > > needed.
> > 
> > TJ; I would like to take these patches through the scheduler tree if you
> > don't mind. Afaict there's no real conflict vs cgroup/for-next (I
> > applied the patches and then did a pull of cgroup/for-next which
> > finished without complaints).
> 
> Yeah, for sure, please go ahead.
> 
> Thanks.

Thanks!

Patch

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 874c427742a9..acd6a9fe85bc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4222,8 +4222,8 @@  static int __sched_setscheduler(struct task_struct *p,
 	 * Changing the policy of the stop threads its a very bad idea:
 	 */
 	if (p == rq->stop) {
-		task_rq_unlock(rq, p, &rf);
-		return -EINVAL;
+		retval = -EINVAL;
+		goto unlock;
 	}
 
 	/*
@@ -4239,8 +4239,8 @@  static int __sched_setscheduler(struct task_struct *p,
 			goto change;
 
 		p->sched_reset_on_fork = reset_on_fork;
-		task_rq_unlock(rq, p, &rf);
-		return 0;
+		retval = 0;
+		goto unlock;
 	}
 change:
 
@@ -4253,8 +4253,8 @@  static int __sched_setscheduler(struct task_struct *p,
 		if (rt_bandwidth_enabled() && rt_policy(policy) &&
 				task_group(p)->rt_bandwidth.rt_runtime == 0 &&
 				!task_group_is_autogroup(task_group(p))) {
-			task_rq_unlock(rq, p, &rf);
-			return -EPERM;
+			retval = -EPERM;
+			goto unlock;
 		}
 #endif
 #ifdef CONFIG_SMP
@@ -4269,8 +4269,8 @@  static int __sched_setscheduler(struct task_struct *p,
 			 */
 			if (!cpumask_subset(span, &p->cpus_allowed) ||
 			    rq->rd->dl_bw.bw == 0) {
-				task_rq_unlock(rq, p, &rf);
-				return -EPERM;
+				retval = -EPERM;
+				goto unlock;
 			}
 		}
 #endif
@@ -4289,8 +4289,8 @@  static int __sched_setscheduler(struct task_struct *p,
 	 * is available.
 	 */
 	if ((dl_policy(policy) || dl_task(p)) && sched_dl_overflow(p, policy, attr)) {
-		task_rq_unlock(rq, p, &rf);
-		return -EBUSY;
+		retval = -EBUSY;
+		goto unlock;
 	}
 
 	p->sched_reset_on_fork = reset_on_fork;
@@ -4346,6 +4346,10 @@  static int __sched_setscheduler(struct task_struct *p,
 	preempt_enable();
 
 	return 0;
+
+unlock:
+	task_rq_unlock(rq, p, &rf);
+	return retval;
 }
 
 static int _sched_setscheduler(struct task_struct *p, int policy,