
[1/2] arm64: topology: Tell the scheduler about the relative power of cores

Message ID 1400545421-28067-1-git-send-email-broonie@kernel.org
State New

Commit Message

Mark Brown May 20, 2014, 12:23 a.m. UTC
From: Mark Brown <broonie@linaro.org>

In heterogeneous systems like big.LITTLE systems the scheduler will be
able to make better use of the available cores if we provide power numbers
to it indicating their relative performance. Do this by parsing the CPU
nodes in the DT.

This code currently has no effect as no information on the relative
performance of the cores is provided.

Signed-off-by: Mark Brown <broonie@linaro.org>
---
 arch/arm64/kernel/topology.c | 153 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 153 insertions(+)

Comments

Catalin Marinas May 20, 2014, 5:37 p.m. UTC | #1
On Tue, May 20, 2014 at 01:23:40AM +0100, Mark Brown wrote:
> In heterogeneous systems like big.LITTLE systems the scheduler will be
> able to make better use of the available cores if we provide power numbers
> to it indicating their relative performance. Do this by parsing the CPU
> nodes in the DT.

Last time we discussed these two patches, my understanding was that the
mainline scheduler doesn't behave any better on big.LITTLE with this
additional information, unless you also have additional out of tree b.L
MP patches. Vincent is also cleaning up some of the cpu_power usage in
the scheduler.

So unless there are clear benefits in providing such information to the
mainline scheduler, I don't plan to merge them for the time being (I'm
also not convinced of the numbers in the second patch, they need some
benchmarking on real hardware).
Mark Brown May 20, 2014, 6:45 p.m. UTC | #2
On Tue, May 20, 2014 at 06:37:27PM +0100, Catalin Marinas wrote:
> On Tue, May 20, 2014 at 01:23:40AM +0100, Mark Brown wrote:
> > In heterogeneous systems like big.LITTLE systems the scheduler will be
> > able to make better use of the available cores if we provide power numbers
> > to it indicating their relative performance. Do this by parsing the CPU
> > nodes in the DT.

> Last time we discussed these two patches, my understanding was that the
> mainline scheduler doesn't behave any better on big.LITTLE with this
> additional information, unless you also have additional out of tree b.L

That wasn't my recollection, and I'm certainly seeing code in the current
scheduler which appears to make use of the information, though I cannot
claim any depth of understanding.  It's possible you're recalling some
offline or other conversation with people more involved in the scheduler?

The issue I'm aware of with this is the one called out in the changelog
- it could do odd things with loads that try to saturate all cores with
one thread per core, but then I'm not clear that such loads are going to
work effectively on a big.LITTLE system anyway (either some jobs get
packed on a big core or some jobs run a lot slower on little cores).  My
understanding was that more typical workloads with load spread over more
tasks should tend to do better with this.

The biggest win with the out of tree patches is that they have some idea
of the power tradeoffs with the two clusters and actively push to
exploit them.

> MP patches. Vincent is also cleaning up some of the cpu_power usage in
> the scheduler.

> So unless there are clear benefits in providing such information to the
> mainline scheduler, I don't plan to merge them for the time being (I'm

The main thing I'm seeing here is consistency between ARMv8 and ARMv7
big.LITTLE implementations - there's readily available ARMv7 big.LITTLE
hardware out there so I would expect that much of the work on the
scheduler will continue to focus on ARMv7.  Keeping ARMv8 consistent
with ARMv7 should avoid nasty surprises there.

Unless I'm completely misreading the code I'm looking at in fair.c, I
suspect I could come up with benchmarks that showed an impact, but I'm
not sure that's too useful unless there's something you're particularly
interested in; this sort of thing always has an element of taste.

It would be nice to at least merge the parsing code in the first patch.
Without the performance numbers in the table from the second patch it
won't have any practical impact, but it's less diff to carry around and
easier for people to experiment with when doing in-tree work (including
possibly setting other parameters as a result of parsing).  Does that
seem reasonable?

> also not convinced of the numbers in the second patch, they need some
> benchmarking on real hardware).

I'm sure we can all have confidence in the published performance
numbers!  More seriously, as I've said before I think you're overstating
the importance of their accuracy, though as that change is so small it's
a bit less of a concern as diff to carry around.
Nicolas Pitre May 20, 2014, 9:31 p.m. UTC | #3
On Tue, 20 May 2014, Catalin Marinas wrote:

> On Tue, May 20, 2014 at 01:23:40AM +0100, Mark Brown wrote:
> > In heterogeneous systems like big.LITTLE systems the scheduler will be
> > able to make better use of the available cores if we provide power numbers
> > to it indicating their relative performance. Do this by parsing the CPU
> > nodes in the DT.
> 
> Last time we discussed these two patches, my understanding was that the
> mainline scheduler doesn't behave any better on big.LITTLE with this
> additional information, unless you also have additional out of tree b.L
> MP patches. Vincent is also cleaning up some of the cpu_power usage in
> the scheduler.
> 
> So unless there are clear benefits in providing such information to the
> mainline scheduler, I don't plan to merge them for the time being (I'm
> also not convinced of the numbers in the second patch, they need some
> benchmarking on real hardware).

We are indeed in the process of working out how to use this information 
in the scheduler, submitting patches, etc.  Thing is, we risk seeing the 
scheduler maintainers saying: "unless there are clear users of those 
enhancements to the mainline scheduler, we don't plan to merge them for 
the time being."

It might be more productive to merge _something_ first, and doing so on 
the architecture side is certainly the least intrusive initial move.

As to the numbers themselves... I suspect it will be hard to come up 
with a benchmark that everyone will agree with.  Those numbers certainly 
can be refined later when the scheduler side has evolved and more tests 
have been performed.


Nicolas
Catalin Marinas May 22, 2014, 10:35 a.m. UTC | #4
On Tue, May 20, 2014 at 10:31:39PM +0100, Nicolas Pitre wrote:
> On Tue, 20 May 2014, Catalin Marinas wrote:
> 
> > On Tue, May 20, 2014 at 01:23:40AM +0100, Mark Brown wrote:
> > > In heterogeneous systems like big.LITTLE systems the scheduler will be
> > > able to make better use of the available cores if we provide power numbers
> > > to it indicating their relative performance. Do this by parsing the CPU
> > > nodes in the DT.
> > 
> > Last time we discussed these two patches, my understanding was that the
> > mainline scheduler doesn't behave any better on big.LITTLE with this
> > additional information, unless you also have additional out of tree b.L
> > MP patches. Vincent is also cleaning up some of the cpu_power usage in
> > the scheduler.
> > 
> > So unless there are clear benefits in providing such information to the
> > mainline scheduler, I don't plan to merge them for the time being (I'm
> > also not convinced of the numbers in the second patch, they need some
> > benchmarking on real hardware).
> 
> We are indeed in the process of working out how to use this information 
> in the scheduler, submitting patches, etc.  Thing is, we risk seeing the 
> scheduler maintainers saying: "unless there are clear users of those 
> enhancements to the mainline scheduler, we don't plan to merge them for 
> the time being."

I'm pretty sure they know the big.LITTLE story and we can reiterate that
it's required for arm64 as well (though I'm not sure they see it as
different from arm in this context).

I really appreciate the work you and Vincent are doing to clarify the
cpu_power usage in the scheduler. But for the time being, its only use
seems to be for SMT and rather problematic for big.LITTLE
(https://lkml.org/lkml/2014/3/28/197).

I also think the arch_scale_freq_power() name is wrong in the maximum
capacity context.  If we decide that we need frequency-invariant task
load, you realise that you actually need arch_scale_freq_power() to vary
with the CPU frequency for normalisation, rather than with the
big.LITTLE performance differences.  But I see you are already trying to
clean this up as well (https://lkml.org/lkml/2014/5/14/625), which is
good; let's wait until these patches go in (and there is already a user,
though the behaviour isn't correct until we get Vincent's patches in).

> It might be more productive to merge _something_ first, and doing so on 
> the architecture side is certainly the least intrusive initial move.

You are already renaming the arm arch_scale_freq_power(), so why would
you want to create more work for yourself by having to rewrite parts of
arm64 as well once you reach a consensus on scheduler changes?

Just to be clear, I'm not against Mark's patch but I don't see any value
in pushing it into mainline now, given that it is likely to be changed
in the future following the work you and Vincent are doing.
Mark Brown May 22, 2014, 3:18 p.m. UTC | #5
On Thu, May 22, 2014 at 11:35:51AM +0100, Catalin Marinas wrote:
> On Tue, May 20, 2014 at 10:31:39PM +0100, Nicolas Pitre wrote:

> > It might be more productive to merge _something_ first, and doing so on 
> > the architecture side is certainly the least intrusive initial move.

> You are already renaming the arm arch_scale_freq_power(), why would you
> want to create more work for you by having to re-write parts of arm64 as
> well once you get to a consensus on scheduler changes?

> Just to be clear, I'm not against Mark's patch but I don't see any value
> in pushing it into mainline now, given that it is likely to be changed
> in the future following the work you and Vincent are doing.

Having the code in both ARMv7 and ARMv8 would mean that updating ARMv8
is mostly just typing rather than thinking, which should make life
easier.  Any pain involved in the update is going to be felt updating
ARMv7 anyway; the additional difficulty in making the same update on
ARMv8 should be low if the code is in sync, since the thinking part
applies equally well to both.

Not having the code there means there's that little bit more out-of-tree
code to keep in sync between the two architectures for testing updates,
and that any generic improvements which happen to be implemented without
requiring architecture updates will need code written for ARMv8 anyway.
At least some of the code is going to remain no matter what parameters
we end up passing to the scheduler.

If you are trying to save effort I think it's better to keep the two
architectures in sync as far as possible.  One other thing we could do
is move all the actual parameter setting to a separate patch so at least
the bit where we work out what cores we've got could go in.

Patch

diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 4c3b2511a6a1..08ce694e5078 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -19,11 +19,35 @@ 
 #include <linux/nodemask.h>
 #include <linux/of.h>
 #include <linux/sched.h>
+#include <linux/slab.h>
 
 #include <asm/cputype.h>
 #include <asm/smp_plat.h>
 #include <asm/topology.h>
 
+/*
+ * cpu power table
+ * This per cpu data structure describes the relative capacity of each core.
+ * On a heterogeneous system, cores don't have the same computation capacity
+ * and we reflect that difference in the cpu_power field so the scheduler can
+ * take this difference into account during load balance. A per cpu structure
+ * is preferred because each CPU updates its own cpu_power field during the
+ * load balance except for idle cores. One idle core is selected to run the
+ * rebalance_domains for all idle cores and the cpu_power can be updated
+ * during this sequence.
+ */
+static DEFINE_PER_CPU(unsigned long, cpu_scale);
+
+unsigned long arch_scale_freq_power(struct sched_domain *sd, int cpu)
+{
+	return per_cpu(cpu_scale, cpu);
+}
+
+static void set_power_scale(unsigned int cpu, unsigned long power)
+{
+	per_cpu(cpu_scale, cpu) = power;
+}
+
 static int __init get_cpu_for_node(struct device_node *node)
 {
 	struct device_node *cpu_node;
@@ -162,6 +186,38 @@  static int __init parse_cluster(struct device_node *cluster, int depth)
 	return 0;
 }
 
+struct cpu_efficiency {
+	const char *compatible;
+	unsigned long efficiency;
+};
+
+/*
+ * Table of the relative efficiency of each processor.
+ * The efficiency value must fit in 20 bits and the final
+ * cpu_scale value must be in the range
+ *   0 < cpu_scale < 3*SCHED_POWER_SCALE/2
+ * in order to return at most 1 when DIV_ROUND_CLOSEST
+ * is used to compute the capacity of a CPU.
+ * Processors that are not defined in the table
+ * use the default SCHED_POWER_SCALE value for cpu_scale.
+ */
+static const struct cpu_efficiency table_efficiency[] = {
+	{ NULL, },
+};
+
+static unsigned long *__cpu_capacity;
+#define cpu_capacity(cpu)	__cpu_capacity[cpu]
+
+static unsigned long middle_capacity = 1;
+
+/*
+ * Iterate over all CPUs' descriptors in the DT and compute the efficiency
+ * (as per table_efficiency). Also calculate a middle efficiency,
+ * as close as possible to (max{eff_i} + min{eff_i}) / 2.
+ * This is later used to scale the cpu_power field such that an
+ * 'average' CPU is of middle power. Also see the comments near
+ * table_efficiency[] and update_cpu_power().
+ */
 static int __init parse_dt_topology(void)
 {
 	struct device_node *cn, *map;
@@ -205,6 +261,91 @@  out:
 	return ret;
 }
 
+static void __init parse_dt_cpu_power(void)
+{
+	const struct cpu_efficiency *cpu_eff;
+	struct device_node *cn;
+	unsigned long min_capacity = ULONG_MAX;
+	unsigned long max_capacity = 0;
+	unsigned long capacity = 0;
+	int cpu;
+
+	__cpu_capacity = kcalloc(nr_cpu_ids, sizeof(*__cpu_capacity),
+				 GFP_NOWAIT);
+
+	for_each_possible_cpu(cpu) {
+		const u32 *rate;
+		int len;
+
+		/* Too early to use cpu->of_node */
+		cn = of_get_cpu_node(cpu, NULL);
+		if (!cn) {
+			pr_err("Missing device node for CPU %d\n", cpu);
+			continue;
+		}
+
+		for (cpu_eff = table_efficiency; cpu_eff->compatible; cpu_eff++)
+			if (of_device_is_compatible(cn, cpu_eff->compatible))
+				break;
+
+		if (cpu_eff->compatible == NULL) {
+			pr_warn("%s: Unknown CPU type\n", cn->full_name);
+			continue;
+		}
+
+		rate = of_get_property(cn, "clock-frequency", &len);
+		if (!rate || len != 4) {
+			pr_err("%s: Missing clock-frequency property\n",
+				cn->full_name);
+			continue;
+		}
+
+		capacity = ((be32_to_cpup(rate)) >> 20) * cpu_eff->efficiency;
+
+		/* Save min capacity of the system */
+		if (capacity < min_capacity)
+			min_capacity = capacity;
+
+		/* Save max capacity of the system */
+		if (capacity > max_capacity)
+			max_capacity = capacity;
+
+		cpu_capacity(cpu) = capacity;
+	}
+
+	/* If min and max capacities are equal we bypass the update of the
+	 * cpu_scale because all CPUs have the same capacity. Otherwise, we
+	 * compute a middle_capacity factor that will ensure that the capacity
+	 * of an 'average' CPU of the system will be as close as possible to
+	 * SCHED_POWER_SCALE, which is the default value, but with the
+	 * constraint explained near table_efficiency[].
+	 */
+	if (min_capacity == max_capacity)
+		return;
+	else if (4 * max_capacity < (3 * (max_capacity + min_capacity)))
+		middle_capacity = (min_capacity + max_capacity)
+				>> (SCHED_POWER_SHIFT+1);
+	else
+		middle_capacity = ((max_capacity / 3)
+				>> (SCHED_POWER_SHIFT-1)) + 1;
+}
+
+/*
+ * Look for a custom capacity of a CPU in the cpu_topo_data table during
+ * boot. The update of all CPUs is in O(n^2) for a heterogeneous system but
+ * the function returns directly for an SMP system.
+ */
+static void update_cpu_power(unsigned int cpu)
+{
+	if (!cpu_capacity(cpu))
+		return;
+
+	set_power_scale(cpu, cpu_capacity(cpu) / middle_capacity);
+
+	pr_info("CPU%u: update cpu_power %lu\n",
+		cpu, arch_scale_freq_power(NULL, cpu));
+}
+
 /*
  * cpu topology table
  */
@@ -288,6 +429,7 @@  void store_cpu_topology(unsigned int cpuid)
 
 topology_populated:
 	update_siblings_masks(cpuid);
+	update_cpu_power(cpuid);
 }
 
 static void __init reset_cpu_topology(void)
@@ -308,6 +450,14 @@  static void __init reset_cpu_topology(void)
 	}
 }
 
+static void __init reset_cpu_power(void)
+{
+	unsigned int cpu;
+
+	for_each_possible_cpu(cpu)
+		set_power_scale(cpu, SCHED_POWER_SCALE);
+}
+
 void __init init_cpu_topology(void)
 {
 	reset_cpu_topology();
@@ -318,4 +468,7 @@  void __init init_cpu_topology(void)
 	 */
 	if (parse_dt_topology())
 		reset_cpu_topology();
+
+	reset_cpu_power();
+	parse_dt_cpu_power();
 }