
[v9,2/8] dt-bindings: Introduce interconnect binding

Message ID 20180831140151.13972-3-georgi.djakov@linaro.org
State Superseded

Commit Message

Georgi Djakov Aug. 31, 2018, 2:01 p.m. UTC
This binding is intended to represent the relations between the interconnect
controllers (providers) and consumer device nodes. It will allow creating links
between consumers and interconnect paths (exposed by interconnect providers).

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>

---
 .../bindings/interconnect/interconnect.txt    | 60 +++++++++++++++++++
 1 file changed, 60 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt

Comments

Jordan Crouse Sept. 26, 2018, 2:34 p.m. UTC | #1
On Tue, Sep 25, 2018 at 01:02:15PM -0500, Rob Herring wrote:
> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
> > This binding is intended to represent the relations between the interconnect
> > controllers (providers) and consumer device nodes. It will allow creating links
> > between consumers and interconnect paths (exposed by interconnect providers).
> 
> As I mentioned in person, I want to see other SoC families using this
> before accepting. They don't have to be ready for upstream, but WIP
> patches or even just a "yes, this works for us and we're going to use
> this binding on X".
> 
> Also, I think the QCom GPU use of this should be fully sorted out. Or
> more generically how this fits into OPP binding which seems to be never
> ending extended...


This is a discussion I wouldn't mind having now.  To jog memories, this is what
I posted a few weeks ago:

https://patchwork.freedesktop.org/patch/246117/

This seems like the easiest way to me to tie the frequency and the bandwidth
quota together for GPU devfreq scaling but I'm not married to the format and
I'll happily go a few rounds on the bikeshed if we can get something we can
be happy with.
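To make the bikeshed concrete: one possible shape (the property names below are hypothetical illustrations, not the format from the linked patch) is to attach a bandwidth vote to each existing GPU operating-points-v2 entry, so devfreq picks up frequency and bandwidth together at every step:

```dts
gpu_opp_table: opp-table {
	compatible = "operating-points-v2";

	opp-710000000 {
		opp-hz = /bits/ 64 <710000000>;
		/* hypothetical: bandwidth to vote alongside this frequency */
		opp-peak-bw-kBps = <7216000>;
	};
	opp-401800000 {
		opp-hz = /bits/ 64 <401800000>;
		opp-peak-bw-kBps = <3072000>;
	};
};
```

The GPU driver's devfreq callback would then translate the per-OPP bandwidth value into an interconnect request on its path to memory.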

Jordan

> > Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> > ---
> >  .../bindings/interconnect/interconnect.txt    | 60 +++++++++++++++++++
> >  1 file changed, 60 insertions(+)
> >  create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
> > 
> > diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> > new file mode 100644
> > index 000000000000..5cb7d3c8d44d
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> > @@ -0,0 +1,60 @@
> > +Interconnect Provider Device Tree Bindings
> > +=========================================
> > +
> > +The purpose of this document is to define a common set of generic interconnect
> > +providers/consumers properties.
> > +
> > +
> > += interconnect providers =
> > +
> > +The interconnect provider binding is intended to represent the interconnect
> > +controllers in the system. Each provider registers a set of interconnect
> > +nodes, which expose the interconnect related capabilities of the interconnect
> > +to consumer drivers. These capabilities can be throughput, latency, priority
> > +etc. The consumer drivers set constraints on interconnect path (or endpoints)
> > +depending on the use case. Interconnect providers can also be interconnect
> > +consumers, such as in the case where two network-on-chip fabrics interface
> > +directly
> 
> missing '.'
> 
> > +
> > +Required properties:
> > +- compatible : contains the interconnect provider compatible string
> > +- #interconnect-cells : number of cells in a interconnect specifier needed to
> > +			encode the interconnect node id
> > +
> > +Example:
> > +
> > +		snoc: snoc@580000 {
> > +			compatible = "qcom,msm8916-snoc";
> > +			#interconnect-cells = <1>;
> > +			reg = <0x580000 0x14000>;
> > +			clock-names = "bus_clk", "bus_a_clk";
> > +			clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
> > +				 <&rpmcc RPM_SMD_SNOC_A_CLK>;
> > +		};
> > +
> > +
> > += interconnect consumers =
> > +
> > +The interconnect consumers are device nodes which dynamically express their
> > +bandwidth requirements along interconnect paths they are connected to. There
> > +can be multiple interconnect providers on a SoC and the consumer may consume
> > +multiple paths from different providers depending on use case and the
> > +components it has to interact with.
> > +
> > +Required properties:
> > +interconnects : Pairs of phandles and interconnect provider specifier to denote
> > +	        the edge source and destination ports of the interconnect path.
> > +
> > +Optional properties:
> > +interconnect-names : List of interconnect path name strings sorted in the same
> > +		     order as the interconnects property. Consumers drivers will use
> > +		     interconnect-names to match interconnect paths with interconnect
> > +		     specifiers.
> 
> specifier pairs.
> 
> > +
> > +Example:
> > +
> > +	sdhci@7864000 {
> > +		...
> > +		interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
> > +		interconnect-names = "ddr";
> > +	};


-- 
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
Georgi Djakov Sept. 26, 2018, 2:42 p.m. UTC | #2
Hi Rob,

Thanks for the comments!

On 09/25/2018 09:02 PM, Rob Herring wrote:
> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
>> This binding is intended to represent the relations between the interconnect
>> controllers (providers) and consumer device nodes. It will allow creating links
>> between consumers and interconnect paths (exposed by interconnect providers).
> 
> As I mentioned in person, I want to see other SoC families using this
> before accepting. They don't have to be ready for upstream, but WIP
> patches or even just a "yes, this works for us and we're going to use
> this binding on X".


Other than the 3 Qualcomm SoCs (msm8916, msm8996, sdm845) that are
currently using this binding, there is ongoing work from at least two
other vendors that will be using this same binding. I will check on
their progress so far.

> Also, I think the QCom GPU use of this should be fully sorted out. Or
> more generically how this fits into OPP binding which seems to be never
> ending extended...


I see this as a further step. It could be an OPP binding which includes
bandwidth values, or some separate DT property. Jordan has already
proposed something - do you have any initial comments on that?

BR,
Georgi

Georgi Djakov Sept. 26, 2018, 3:03 p.m. UTC | #3
Hi Sudeep,

On 09/26/2018 05:48 PM, Sudeep Holla wrote:
> On Wed, Sep 26, 2018 at 05:42:15PM +0300, Georgi Djakov wrote:

>> Hi Rob,

>>

>> Thanks for the comments!

>>

>> On 09/25/2018 09:02 PM, Rob Herring wrote:

>>> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:

>>>> This binding is intended to represent the relations between the interconnect

>>>> controllers (providers) and consumer device nodes. It will allow creating links

>>>> between consumers and interconnect paths (exposed by interconnect providers).

>>>

>>> As I mentioned in person, I want to see other SoC families using this 

>>> before accepting. They don't have to be ready for upstream, but WIP 

>>> patches or even just a "yes, this works for us and we're going to use 

>>> this binding on X".

>>

>> Other than the 3 Qualcomm SoCs (msm8916, msm8996, sdm845) that are

>> currently using this binding, there is ongoing work from at least two

>> other vendors that would be using this same binding. I will check on

>> what is their progress so far.

>>

>>> Also, I think the QCom GPU use of this should be fully sorted out. Or 

>>> more generically how this fits into OPP binding which seems to be never 

>>> ending extended...

>>

>> I see this as a further step. It could be OPP binding which include

>> bandwidth values or some separate DT property. Jordan has already

>> proposed something, do you have any initial comments on that?

> 

> I am curious as how this fits into new systems which have firmware driven

> CPUFreq and other DVFS. I would like to avoid using this in such systems

> and leave it upto the firmware to scale the bus/interconnect based on the

> other components that are connected to it and active.


This can be used with firmware driven systems too. In this case the
interconnect platform driver will not manage the interconnects directly,
but will collect data and provide hints to the firmware. Then the
firmware can take into account these hints and do the scaling.

Thanks,
Georgi
Sudeep Holla Oct. 2, 2018, 11:17 a.m. UTC | #4
On Mon, Oct 01, 2018 at 04:49:32PM -0700, Saravana Kannan wrote:
> On 09/26/2018 07:48 AM, Sudeep Holla wrote:

[...]

> You've made the same point multiple times across different patch sets. Not
> all FW can do arbitrary functions. A lot of them are very limited in their
> capabilities. So, as much as you and I would like to let the FW do the work,
> it's not always possible. So, in those cases, we do need to have support for
> the kernel scaling the interconnects correctly. Hopefully this clears up
> your questions about FW capabilities.


Yes, I do understand I have made the same point multiple times, and it's
intentional. We need to get the fragmented f/w support story fixed.
Different ARM vendors are doing different things in f/w, and ARM sees the
same fragmentation story as before. We have come up with a new specification,
and my annoying repeated emails are just a constant reminder of that.

I do understand we have existing implementations to consider, but fixing
the functionality in an arbitrary way is not good design, and it is better
to get them fixed for the future.

--
Regards,
Sudeep
Sudeep Holla Oct. 3, 2018, 9:33 a.m. UTC | #5
On Tue, Oct 02, 2018 at 11:56:56AM -0700, Saravana Kannan wrote:
> On 10/02/2018 04:17 AM, Sudeep Holla wrote:

[...]

> I believe the fragmentation you are referring to is in the
> interface/communication protocol. I see the benefit of standardizing that as
> long as the standard actually turns out to be good. But that's completely
> separate from what the FW can/can't do. Asking to standardize what the FW
> can/can't do doesn't seem realistic as each chip vendor will have different
> priorities - power, performance, cost, chip area, etc. It's the conflation
> of these separate topics that doesn't help IMHO.


I agree on the interface/communication protocol fragmentation, and firmware
can implement whatever the vendor wishes. What I was also referring to was
the mix-and-match approach, which should be avoided.

e.g. Devices A and B's PM is managed completely by firmware using OSPM hints.
Suppose Device X's PM is dependent on Devices A and B, in which case it's
simpler and cleaner to leave Device X's PM to the firmware. Reading the state
of A and B and using that as a hint for X is just overhead which the firmware
can manage better. That was my main concern here: A=CPU, B=some other
device, and X is the interconnect to which A and B are connected.

If CPU OPPs are obtained from f/w and this interconnect from DT, mapping
them is a mess, and that is what concerned me. I am sorry if that's not
the scenario here; I may have been mistaken.

--
Regards,
Sudeep
Georgi Djakov Nov. 27, 2018, 6:05 p.m. UTC | #6
Hi Rob,

On 9/26/18 17:42, Georgi Djakov wrote:
> Hi Rob,
> 
> Thanks for the comments!
> 
> On 09/25/2018 09:02 PM, Rob Herring wrote:
>> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
>>> This binding is intended to represent the relations between the interconnect
>>> controllers (providers) and consumer device nodes. It will allow creating links
>>> between consumers and interconnect paths (exposed by interconnect providers).
>>
>> As I mentioned in person, I want to see other SoC families using this
>> before accepting. They don't have to be ready for upstream, but WIP
>> patches or even just a "yes, this works for us and we're going to use
>> this binding on X".


Patches for the iMX7ULP platform are already available [1]. Thanks,
Alexandre Bailon!

The interconnect API seems to be a good fit for Nvidia SoCs as well. There
is an ongoing discussion about implementing an interconnect provider
driver for Tegra [2]. Thanks, Thierry and Krishna!

In addition to the above, I also checked privately with a few other SoC
maintainers and made them aware of these patches. Some are not ready for
upstream yet, but the feedback was positive and I expect more SoCs to
make use of this in the future.

BR,
Georgi

[1] https://lkml.org/lkml/2018/11/17/368
[2] https://marc.info/?t=154056181900001&r=1&w=2



Patch

diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
new file mode 100644
index 000000000000..5cb7d3c8d44d
--- /dev/null
+++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
@@ -0,0 +1,60 @@ 
+Interconnect Provider Device Tree Bindings
+=========================================
+
+The purpose of this document is to define a common set of generic interconnect
+providers/consumers properties.
+
+
+= interconnect providers =
+
+The interconnect provider binding is intended to represent the interconnect
+controllers in the system. Each provider registers a set of interconnect
+nodes, which expose the interconnect related capabilities of the interconnect
+to consumer drivers. These capabilities can be throughput, latency, priority
+etc. The consumer drivers set constraints on interconnect path (or endpoints)
+depending on the use case. Interconnect providers can also be interconnect
+consumers, such as in the case where two network-on-chip fabrics interface
+directly
+
+Required properties:
+- compatible : contains the interconnect provider compatible string
+- #interconnect-cells : number of cells in a interconnect specifier needed to
+			encode the interconnect node id
+
+Example:
+
+		snoc: snoc@580000 {
+			compatible = "qcom,msm8916-snoc";
+			#interconnect-cells = <1>;
+			reg = <0x580000 0x14000>;
+			clock-names = "bus_clk", "bus_a_clk";
+			clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
+				 <&rpmcc RPM_SMD_SNOC_A_CLK>;
+		};
+
+
+= interconnect consumers =
+
+The interconnect consumers are device nodes which dynamically express their
+bandwidth requirements along interconnect paths they are connected to. There
+can be multiple interconnect providers on a SoC and the consumer may consume
+multiple paths from different providers depending on use case and the
+components it has to interact with.
+
+Required properties:
+interconnects : Pairs of phandles and interconnect provider specifier to denote
+	        the edge source and destination ports of the interconnect path.
+
+Optional properties:
+interconnect-names : List of interconnect path name strings sorted in the same
+		     order as the interconnects property. Consumers drivers will use
+		     interconnect-names to match interconnect paths with interconnect
+		     specifiers.
+
+Example:
+
+	sdhci@7864000 {
+		...
+		interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
+		interconnect-names = "ddr";
+	};