diff mbox series

[V2,1/7] sched/topology: Adding function partition_sched_domains_locked()

Message ID 1517503869-3179-2-git-send-email-mathieu.poirier@linaro.org
State Superseded
Series sched/deadline: fix cpusets bandwidth accounting

Commit Message

Mathieu Poirier Feb. 1, 2018, 4:51 p.m. UTC
Introducing function partition_sched_domains_locked() by taking
the mutex locking code out of the original function.  That way
the work done by partition_sched_domains_locked() can be reused
without dropping the mutex lock.

This patch doesn't change the functionality provided by the
original code.

Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>

---
 include/linux/sched/topology.h |  9 +++++++++
 kernel/sched/topology.c        | 17 +++++++++++++----
 2 files changed, 22 insertions(+), 4 deletions(-)

-- 
2.7.4
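The split described in the commit message follows a common kernel pattern: a `_locked` variant that assumes the caller already holds the lock, plus a thin wrapper that takes and releases it around the real work. A minimal userspace sketch of the same shape, using pthreads — the names (`repartition`, `domains_mutex`) are illustrative, not the kernel API:

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

static pthread_mutex_t domains_mutex = PTHREAD_MUTEX_INITIALIZER;
static int ndoms_cur;

/* The "_locked" variant: caller must hold domains_mutex. */
void repartition_locked(int ndoms_new)
{
	/* Rough analogue of lockdep_assert_held(): trylock returning
	 * EBUSY proves the mutex is currently held (by someone). */
	assert(pthread_mutex_trylock(&domains_mutex) == EBUSY);
	ndoms_cur = ndoms_new;
}

/* Convenience wrapper: acquires the mutex around the real work,
 * mirroring what partition_sched_domains() becomes in this patch. */
void repartition(int ndoms_new)
{
	pthread_mutex_lock(&domains_mutex);
	repartition_locked(ndoms_new);
	pthread_mutex_unlock(&domains_mutex);
}

int current_ndoms(void)
{
	pthread_mutex_lock(&domains_mutex);
	int n = ndoms_cur;
	pthread_mutex_unlock(&domains_mutex);
	return n;
}
```

The point of the refactoring is that code which already holds the mutex can call the `_locked` variant directly instead of dropping and retaking the lock.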

Comments

Juri Lelli Feb. 2, 2018, 10:19 a.m. UTC | #1
Hi Mathieu,

On 01/02/18 09:51, Mathieu Poirier wrote:
> Introducing function partition_sched_domains_locked() by taking
> the mutex locking code out of the original function.  That way
> the work done by partition_sched_domains_locked() can be reused
> without dropping the mutex lock.
>
> This patch doesn't change the functionality provided by the
> original code.
>
> Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
> ---

[...]

> +/*
> + * Call with hotplug lock held

Is this the one that we can actually check if it's locked with
lockdep_assert_cpus_held()?

> + */
> +void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
> +			     struct sched_domain_attr *dattr_new)
> +{
> +	mutex_lock(&sched_domains_mutex);
> +	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
>  	mutex_unlock(&sched_domains_mutex);
>  }


Best,

- Juri
Mathieu Poirier Feb. 5, 2018, 6:11 p.m. UTC | #2
On 2 February 2018 at 03:19, Juri Lelli <juri.lelli@redhat.com> wrote:
> Hi Mathieu,
>
> On 01/02/18 09:51, Mathieu Poirier wrote:
>> Introducing function partition_sched_domains_locked() by taking
>> the mutex locking code out of the original function.  That way
>> the work done by partition_sched_domains_locked() can be reused
>> without dropping the mutex lock.
>>
>> This patch doesn't change the functionality provided by the
>> original code.
>>
>> Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
>> ---
>
> [...]
>
>> +/*
>> + * Call with hotplug lock held
>
> Is this the one that we can actually check if it's locked with
> lockdep_assert_cpus_held()?

Hi Juri,

You are correct - we could call lockdep_assert_cpus_held() but in my
opinion it would be in a separate patch and probably outside the scope
of this work.  The sole purpose of this patch is to get the
locking/unlocking operations of mutex sched_domains_mutex out of
function partition_sched_domains_locked().

Mathieu

>
>> + */
>> +void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
>> +                          struct sched_domain_attr *dattr_new)
>> +{
>> +     mutex_lock(&sched_domains_mutex);
>> +     partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
>>       mutex_unlock(&sched_domains_mutex);
>>  }
>
> Best,
>
> - Juri
Juri Lelli Feb. 6, 2018, 7:42 a.m. UTC | #3
On 05/02/18 11:11, Mathieu Poirier wrote:
> On 2 February 2018 at 03:19, Juri Lelli <juri.lelli@redhat.com> wrote:
> > Hi Mathieu,
> >
> > On 01/02/18 09:51, Mathieu Poirier wrote:
> >> Introducing function partition_sched_domains_locked() by taking
> >> the mutex locking code out of the original function.  That way
> >> the work done by partition_sched_domains_locked() can be reused
> >> without dropping the mutex lock.
> >>
> >> This patch doesn't change the functionality provided by the
> >> original code.
> >>
> >> Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
> >> ---
> >
> > [...]
> >
> >> +/*
> >> + * Call with hotplug lock held
> >
> > Is this the one that we can actually check if it's locked with
> > lockdep_assert_cpus_held()?
>
> Hi Juri,
>
> You are correct - we could call lockdep_assert_cpus_held() but in my
> opinion it would be in a separate patch and probably outside the scope
> of this work.  The sole purpose of this patch is to get the
> locking/unlocking operations of mutex sched_domains_mutex out of
> function partition_sched_domains_locked().

Fair enough. I just thought that, since you are adding the comment
above, we might as well add an explicit check for the hotplug lock.

Best,

- Juri

Patch

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index cf257c2e728d..118141c3216a 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -162,6 +162,9 @@  static inline struct cpumask *sched_domain_span(struct sched_domain *sd)
 	return to_cpumask(sd->span);
 }
 
+extern void partition_sched_domains_locked(int ndoms_new,
+					   cpumask_var_t doms_new[],
+					   struct sched_domain_attr *dattr_new);
 extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 				    struct sched_domain_attr *dattr_new);
 
@@ -207,6 +210,12 @@  extern void set_sched_topology(struct sched_domain_topology_level *tl);
 struct sched_domain_attr;
 
 static inline void
+partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+			       struct sched_domain_attr *dattr_new)
+{
+}
+
+static inline void
 partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 			struct sched_domain_attr *dattr_new)
 {
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 41bf2a531ee5..892e1f9c78f0 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1842,15 +1842,15 @@  static int dattrs_equal(struct sched_domain_attr *cur, int idx_cur,
  * ndoms_new == 0 is a special case for destroying existing domains,
  * and it will not create the default domain.
  *
- * Call with hotplug lock held
+ * Call with hotplug lock and sched_domains_mutex held
  */
-void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
-			     struct sched_domain_attr *dattr_new)
+void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+				    struct sched_domain_attr *dattr_new)
 {
 	int i, j, n;
 	int new_topology;
 
-	mutex_lock(&sched_domains_mutex);
+	lockdep_assert_held(&sched_domains_mutex);
 
 	/* Always unregister in case we don't destroy any domains: */
 	unregister_sched_domain_sysctl();
@@ -1915,7 +1915,16 @@  void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 	ndoms_cur = ndoms_new;
 
 	register_sched_domain_sysctl();
+}
 
+/*
+ * Call with hotplug lock held
+ */
+void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+			     struct sched_domain_attr *dattr_new)
+{
+	mutex_lock(&sched_domains_mutex);
+	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
 	mutex_unlock(&sched_domains_mutex);
 }