Message ID: 20180731161340.13000-3-georgi.djakov@linaro.org
State: New
Hi Rob,

On 08/03/2018 12:02 AM, Rob Herring wrote:
> On Tue, Jul 31, 2018 at 10:13 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>
>> This binding is intended to represent the interconnect hardware present
>> in some of the modern SoCs. Currently it consists only of a binding for
>> the interconnect hardware devices (provider).
>
> If you want the bindings reviewed, then you need to send them to the
> DT list. CC'ing me is pointless, I get CC'ed too many things to read.

Oops, ok!

> The consumer and producer binding should be a single patch. One is not
> useful without the other.

The reason for splitting them is that they can be reviewed separately.
Also, we can rely on platform data instead of using DT and the consumer
binding. However, I will do as you suggest.

> There is also a patch series from Maxime Ripard that's addressing the
> same general area. See "dt-bindings: Add a dma-parent property". We
> don't need multiple ways to address describing the device to memory
> paths, so you all had better work out a common solution.

Looks like this fits exactly into the interconnect API concept. I see
MBUS as an interconnect provider and display/camera as consumers that
report their bandwidth needs. I am also planning to add support for
priority.

Thanks,
Georgi
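The call pattern Georgi sketches (a consumer such as a display driver requesting a path and reporting its bandwidth needs) can be illustrated outside the kernel. This is a hedged sketch: the `icc_get`/`icc_set` names follow the API proposed in this series, but the struct layout, stub bodies, and the endpoint ids are stand-ins invented for illustration, not the real implementation.

```c
#include <stdio.h>

/* Minimal stand-in for the path handle the proposed API returns. */
struct icc_path {
	int src;
	int dst;
};

static struct icc_path display_path;

/* Stand-in for the real lookup: resolve a (source, destination)
 * endpoint pair into a path through the interconnect topology. */
static struct icc_path *icc_get(int src_id, int dst_id)
{
	display_path.src = src_id;
	display_path.dst = dst_id;
	return &display_path;
}

/* Stand-in: the real call would aggregate requests from all consumers
 * and program the NoC/MBUS hardware accordingly. */
static int icc_set(struct icc_path *path, unsigned int avg_bw_kbps,
		   unsigned int peak_bw_kbps)
{
	printf("path %d->%d: avg %u kbps, peak %u kbps\n",
	       path->src, path->dst, avg_bw_kbps, peak_bw_kbps);
	return 0;
}
```

A display driver would then do something like `icc_set(icc_get(DISPLAY_ID, DDR_ID), avg, peak)` whenever its use case (resolution, frame rate) changes; the ids here are hypothetical.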
Hi Georgi,

On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> > There is also a patch series from Maxime Ripard that's addressing the
> > same general area. See "dt-bindings: Add a dma-parent property". We
> > don't need multiple ways to address describing the device to memory
> > paths, so you all had better work out a common solution.
>
> Looks like this fits exactly into the interconnect API concept. I see
> MBUS as interconnect provider and display/camera as consumers, that
> report their bandwidth needs. I am also planning to add support for
> priority.

Thanks for working on this. After looking at your series, the one thing
I'm a bit uncertain about (and the most important one to us) is how we
would be able to tell through which interconnect the DMA transfers are
done. This is important to us since our topology is actually quite
simple as you've seen, but the RAM is not mapped at the same address on
that bus as it is on the CPU's, so we need to apply an offset to each
buffer being DMA'd.

Maxime

--
Maxime Ripard, Bootlin (formerly Free Electrons)
Embedded Linux and Kernel engineering
https://bootlin.com
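The offset problem Maxime describes can be shown with a small translation helper. This is a hypothetical sketch: on some Allwinner SoCs the CPU sees DRAM at one physical base while the DMA masters on MBUS see it starting at another, so a CPU physical address must be shifted before being handed to a device. The base addresses below are assumptions chosen for illustration, not taken from this thread.

```c
#include <stdint.h>

/* Assumed bases: where the CPU sees DRAM vs. where a DMA master on
 * the bus sees the same DRAM. Illustrative values only. */
#define CPU_DRAM_BASE	0x40000000u
#define BUS_DRAM_BASE	0x00000000u

/* Translate a CPU physical address into the address a DMA master
 * must be programmed with: subtract the CPU-visible base and add
 * the bus-visible base. */
static uint32_t phys_to_dma(uint32_t cpu_phys)
{
	return cpu_phys - CPU_DRAM_BASE + BUS_DRAM_BASE;
}
```

With these bases, a buffer at CPU physical address 0x40001000 would be programmed into the DMA engine as 0x00001000; the question in the thread is how a driver would learn, from the interconnect description, that this translation applies to its path.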
diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
new file mode 100644
index 000000000000..6e2b2971b094
--- /dev/null
+++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
@@ -0,0 +1,33 @@
+Interconnect Provider Device Tree Bindings
+==========================================
+
+The purpose of this document is to define a common set of generic interconnect
+providers/consumers properties.
+
+
+= interconnect providers =
+
+The interconnect provider binding is intended to represent the interconnect
+controllers in the system. Each provider registers a set of interconnect
+nodes, which expose the interconnect-related capabilities of the interconnect
+to consumer drivers. These capabilities can be throughput, latency, priority,
+etc. The consumer drivers set constraints on an interconnect path (or endpoints)
+depending on the use case. Interconnect providers can also be interconnect
+consumers, such as in the case where two network-on-chip fabrics interface
+directly.
+
+Required properties:
+- compatible : contains the interconnect provider compatible string
+- #interconnect-cells : number of cells in an interconnect specifier needed to
+  encode the interconnect node id
+
+Example:
+
+	snoc: snoc@580000 {
+		compatible = "qcom,msm8916-snoc";
+		#interconnect-cells = <1>;
+		reg = <0x580000 0x14000>;
+		clock-names = "bus_clk", "bus_a_clk";
+		clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
+			 <&rpmcc RPM_SMD_SNOC_A_CLK>;
+	};
This binding is intended to represent the interconnect hardware present
in some of the modern SoCs. Currently it consists only of a binding for
the interconnect hardware devices (provider).

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 .../bindings/interconnect/interconnect.txt    | 33 +++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
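For context, a consumer node referencing the provider above might look like the following once the split-out consumer binding lands. This is a hypothetical sketch: the `interconnects`/`interconnect-names` property names follow the companion consumer patch of this series, while the sdhci node, the endpoint ids, and the pairing of source and destination specifiers are invented here for illustration.

```dts
	/* Hypothetical consumer of the snoc provider shown above.
	 * Endpoint ids (1, 512) and the node itself are illustrative. */
	sdhci@7824000 {
		compatible = "qcom,sdhci-msm-v4";
		reg = <0x7824000 0x800>;
		/* one path: source endpoint on snoc -> destination endpoint */
		interconnects = <&snoc 1 &snoc 512>;
		interconnect-names = "sdhc-mem";
	};
```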