| Message ID | 20191219221932.15930-1-daniel.lezcano@linaro.org |
| --- | --- |
| State | New |
| Series | [1/2] DT: bindings: Add cooling cells for idle states |
On Sat, Jan 11, 2020 at 11:32 AM Daniel Lezcano <daniel.lezcano@linaro.org> wrote:
>
> Hi Rob,
>
>
> On Wed, 8 Jan 2020 at 15:03, Rob Herring <robh@kernel.org> wrote:
> >
> > On Thu, Dec 19, 2019 at 11:19:27PM +0100, Daniel Lezcano wrote:
> > > Add DT documentation to add an idle state as a cooling device. The CPU
> > > is actually the cooling device, but the definition is already used by
> > > frequency capping. As we need cpufreq capping and idle injection to
> > > co-exist on the system in order to mitigate at different trip points,
> > > the CPU cannot be used as the cooling device for idle injection. The
> > > idle state can be seen as a hardware feature and therefore as a
> > > component for passive mitigation.
> > >
> > > Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
> > > ---
> > >  Documentation/devicetree/bindings/arm/idle-states.txt | 11 +++++++++++
> > >  1 file changed, 11 insertions(+)
> >
> > This is now a schema in my tree. Can you rebase on that and I'll pick up
> > the binding change.
>
> Mmh, I'm now having some doubts about this binding because it will
> restrict any improvement of the cooling device in the future.
>
> It looks like adding a node to the CPU for the cooling device is more
> adequate, e.g.:
>
>         CPU0: cpu@300 {
>                 device_type = "cpu";
>                 compatible = "arm,cortex-a9";
>                 reg = <0x300>;
>                 /* cpufreq controls */
>                 operating-points = <998400 0
>                                     800000 0
>                                     400000 0
>                                     200000 0>;
>                 clocks = <&prcmu_clk PRCMU_ARMSS>;
>                 clock-names = "cpu";
>                 clock-latency = <20000>;
>                 #cooling-cells = <2>;
>                 thermal-idle {
>                         #cooling-cells = <2>;
>                 };
>         };
>
> [ ... ]
>
>         cooling-device = <&{/cpus/cpu@300/thermal-idle}
>                           THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
>
> A quick test with different configuration combinations shows it is much
> more flexible and open for future changes.
>
> What do you think?

Why do you need #cooling-cells in both the cpu node and a child node?
It's really only 1 device. Maybe you could add another cell to contain
an idle state node if that helps?

Rob
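For reference, a minimal sketch (not from the thread) of how the two strategies could be mapped to different trip points under Daniel's proposed layout; the sensor phandle, trip labels and temperature values are hypothetical:

        thermal-zones {
                cpu-thermal {
                        polling-delay-passive = <250>;
                        polling-delay = <1000>;
                        thermal-sensors = <&cpu_sensor>;        /* hypothetical sensor */

                        trips {
                                cpu_alert_idle: trip0 {
                                        temperature = <75000>;  /* illustrative values */
                                        hysteresis = <2000>;
                                        type = "passive";
                                };
                                cpu_alert_freq: trip1 {
                                        temperature = <85000>;
                                        hysteresis = <2000>;
                                        type = "passive";
                                };
                        };

                        cooling-maps {
                                map0 {
                                        /* idle injection mitigates at the first trip point */
                                        trip = <&cpu_alert_idle>;
                                        cooling-device = <&{/cpus/cpu@300/thermal-idle}
                                                          THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
                                };
                                map1 {
                                        /* cpufreq capping takes over at the hotter trip point */
                                        trip = <&cpu_alert_freq>;
                                        cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
                                };
                        };
                };
        };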
On Mon, Jan 13, 2020 at 11:52 AM Daniel Lezcano <daniel.lezcano@linaro.org> wrote:
>
> On 13/01/2020 17:16, Rob Herring wrote:
> > On Sat, Jan 11, 2020 at 11:32 AM Daniel Lezcano
> > <daniel.lezcano@linaro.org> wrote:
> >>
> >> Hi Rob,
> >>
> >>
> >> On Wed, 8 Jan 2020 at 15:03, Rob Herring <robh@kernel.org> wrote:
> >>>
> >>> On Thu, Dec 19, 2019 at 11:19:27PM +0100, Daniel Lezcano wrote:
> >>>> Add DT documentation to add an idle state as a cooling device. The CPU
> >>>> is actually the cooling device, but the definition is already used by
> >>>> frequency capping. As we need cpufreq capping and idle injection to
> >>>> co-exist on the system in order to mitigate at different trip points,
> >>>> the CPU cannot be used as the cooling device for idle injection. The
> >>>> idle state can be seen as a hardware feature and therefore as a
> >>>> component for passive mitigation.
> >>>>
> >>>> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
> >>>> ---
> >>>>  Documentation/devicetree/bindings/arm/idle-states.txt | 11 +++++++++++
> >>>>  1 file changed, 11 insertions(+)
> >>>
> >>> This is now a schema in my tree. Can you rebase on that and I'll pick up
> >>> the binding change.
> >>
> >> Mmh, I'm now having some doubts about this binding because it will
> >> restrict any improvement of the cooling device in the future.
> >>
> >> It looks like adding a node to the CPU for the cooling device is more
> >> adequate, e.g.:
> >>
> >>         CPU0: cpu@300 {
> >>                 device_type = "cpu";
> >>                 compatible = "arm,cortex-a9";
> >>                 reg = <0x300>;
> >>                 /* cpufreq controls */
> >>                 operating-points = <998400 0
> >>                                     800000 0
> >>                                     400000 0
> >>                                     200000 0>;
> >>                 clocks = <&prcmu_clk PRCMU_ARMSS>;
> >>                 clock-names = "cpu";
> >>                 clock-latency = <20000>;
> >>                 #cooling-cells = <2>;
> >>                 thermal-idle {
> >>                         #cooling-cells = <2>;
> >>                 };
> >>         };
> >>
> >> [ ... ]
> >>
> >>         cooling-device = <&{/cpus/cpu@300/thermal-idle}
> >>                           THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
> >>
> >> A quick test with different configuration combinations shows it is much
> >> more flexible and open for future changes.
> >>
> >> What do you think?
> >
> > Why do you need #cooling-cells in both the cpu node and a child node?
>
> The cooling-cells in the CPU node is for the cpufreq cooling device and
> the one in thermal-idle is for the idle cooling device. The first one
> is for backward compatibility. If no cpufreq cooling device exists,
> then the first cooling-cells is not needed. Maybe we can define
> "thermal-dvfs" at the same time, so we do the change for both and
> prevent mixing the old and new bindings?
>
> > It's really only 1 device.
>
> The main problem is how the thermal framework is designed. When we
> register a cooling device, we pass the node pointer and the core
> framework checks it has a #cooling-cells. Then cooling-maps must have a
> phandle to the node we registered before as a cooling device. This is
> when the thermal-zone <-> cooling device association is done.
>
> With the cpufreq cooling device, the "CPU slot" is now used and we can't
> point to it without ambiguity as we can have different cooling device
> strategies for the same CPU at different temperatures.

So why can't you have:

cooling-device = <&cpu0 DVFS>;
cooling-device = <&cpu0 IDLE>;

(any additional cells omitted for simplicity)

> Is the following acceptable?
>
>         CPU0: cpu@300 {
>                 [ ... ]
>                 thermal-idle {
>                         #cooling-cells = <2>;
>                 };
>
>                 thermal-dvfs {
>                         #cooling-cells = <2>;
>                 };
>         };
>
> Or alternatively, can we define a passive-cooling node?
>
>         thermal-cooling: passive0 {
>                 #cooling-cells = <2>;
>                 strategy = "dvfs" | "idle";
>                 cooling-device = <&CPU0>;
>         };
>
>         cooling-device = <&passive0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
>
> > Maybe you could add another cell to contain an idle state node if that
> > helps?
>
> (Assuming you are referring to a phandle to an idle state) The idle
> states are grouped per cluster because the CPUs belonging to the same
> cluster have the same idle state characteristics. Because of that, the
> phandle will point to the same node and it will be impossible to specify
> a per-CPU cooling device, only per cluster.

What I meant was a phandle in the cooling cells, so #cooling-cells == 3:

cooling-device = <&cpu0 0 0 &cpu_idle_state>,
                 <&cpu1 0 0 &cpu_idle_state>;

Phandle args being a phandle is a bit unusual, but certainly possible.

Rob
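A minimal sketch (not from the thread) of what Rob's #cooling-cells == 3 alternative might look like end to end; the CPU_SLEEP label, latency values, cpu_alert trip and CPU1 label are hypothetical, and the scheme itself is only a suggestion at this point in the discussion:

        idle-states {
                entry-method = "psci";

                CPU_SLEEP: cpu-sleep {
                        compatible = "arm,idle-state";
                        arm,psci-suspend-param = <0x0010000>;
                        entry-latency-us = <40>;
                        exit-latency-us = <100>;
                        min-residency-us = <150>;
                };
        };

        cooling-maps {
                map0 {
                        trip = <&cpu_alert>;
                        /* two cooling-state cells plus a phandle selecting the idle state */
                        cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT &CPU_SLEEP>,
                                         <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT &CPU_SLEEP>;
                };
        };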
diff --git a/Documentation/devicetree/bindings/arm/idle-states.txt b/Documentation/devicetree/bindings/arm/idle-states.txt
index 771f5d20ae18..702a19b6b0b2 100644
--- a/Documentation/devicetree/bindings/arm/idle-states.txt
+++ b/Documentation/devicetree/bindings/arm/idle-states.txt
@@ -346,6 +346,17 @@ follows:
 	    idle-states node. Please refer to the entry-method bindings
 	    documentation for properties definitions.
 
+	When the idle state is used as a cooling device, the #cooling-cells
+	must be added in order to allow registering it.
+
+	- #cooling-cells:
+		Usage: Optional
+		Type: unsigned
+		Size: one cell
+		Definition: A cooling device specific information. At least
+			    must be two. Refer to the thermal DT bindings for
+			    more details.
+
 ===========================================
 4 - Examples
 ===========================================
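As an illustration of the property documented above (a sketch, not part of the patch), an idle state exposing itself as a cooling device and a cooling map referencing it; the CPU_SLEEP_0 label, latency values and cpu_alert trip are hypothetical:

        idle-states {
                entry-method = "psci";

                CPU_SLEEP_0: cpu-sleep-0 {
                        compatible = "arm,idle-state";
                        arm,psci-suspend-param = <0x0010000>;
                        entry-latency-us = <40>;
                        exit-latency-us = <100>;
                        min-residency-us = <150>;
                        #cooling-cells = <2>;   /* the property added by this patch */
                };
        };

        cooling-maps {
                map0 {
                        trip = <&cpu_alert>;
                        cooling-device = <&CPU_SLEEP_0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
                };
        };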
Add DT documentation to add an idle state as a cooling device. The CPU
is actually the cooling device, but the definition is already used by
frequency capping. As we need cpufreq capping and idle injection to
co-exist on the system in order to mitigate at different trip points,
the CPU cannot be used as the cooling device for idle injection. The
idle state can be seen as a hardware feature and therefore as a
component for passive mitigation.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
---
 Documentation/devicetree/bindings/arm/idle-states.txt | 11 +++++++++++
 1 file changed, 11 insertions(+)

--
2.17.1