[v2,0/9] Tegra234 Memory interconnect support

Message ID 20230220140559.28289-1-sumitg@nvidia.com

Message

Sumit Gupta Feb. 20, 2023, 2:05 p.m. UTC
This patch series adds memory interconnect support for the Tegra234 SoC.
It is used to dynamically scale the DRAM frequency according to the
bandwidth requests from different Memory Controller (MC) clients.
MC clients use the ICC framework's icc_set_bw() API to dynamically
request DRAM bandwidth (BW). Along the ICC path, a request is routed
from the MC to the EMC driver. The MC driver passes the request info,
such as the client ID, type, and frequency request, to the BPMP-FW,
which sets the final DRAM frequency considering all existing requests.

MC and EMC are the ICC providers. Nodes in the path for a request are:
     Client[1-n] -> MC -> EMC -> EMEM/DRAM
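
For illustration, a minimal sketch of how an MC client driver could place
such a request through the ICC framework is below. The device, path name
and bandwidth value are placeholders, not code from this series:

/*
 * Hedged illustration only: the path name ("write") and the bandwidth
 * value passed by the caller are placeholders, not from this series.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

static int example_mc_client_request_bw(struct device *dev, u32 peak_kbps)
{
	struct icc_path *path;
	int ret;

	/* Path comes from the client's "interconnects" DT property */
	path = devm_of_icc_get(dev, "write");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/*
	 * The request travels Client -> MC -> EMC; the MC provider forwards
	 * the aggregated value to the BPMP-FW, which picks the DRAM freq.
	 */
	ret = icc_set_bw(path, 0, peak_kbps);
	if (ret)
		dev_err(dev, "failed to set interconnect bandwidth: %d\n", ret);

	return ret;
}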

The patch series also adds interconnect support in the client drivers below:
1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For that, a
   per-cluster OPP table is added and used in the CPUFREQ driver to
   request the minimum BW corresponding to the given CPU frequency in
   the OPP table of the given cluster.
2) PCIe driver, to request the BW required for different modes.

---
v1[1] -> v2:
- moved BW setting to tegra234_mc_icc_set() from EMC driver.
- moved sw clients to the 'tegra_mc_clients' table.
- point 'node->data' to the entry within 'tegra_mc_clients' (see the
  sketch after this list).
- removed 'struct tegra_icc_node'; client info is now fetched via 'node->data'.
- changed error handling in and around tegra_emc_interconnect_init().
- moved 'tegra-icc.h' from 'include/soc/tegra' to 'include/linux'.
- added interconnect support to PCIE driver in 'Patch 9'.
- merged 'Patch 9 & 10' from [1] to get num_channels and use it.
- merged 'Patch 2 & 3' from [1] to add ISO and NISO clients.
- added Krzysztof's 'Acked-by' from 'Patch 05/10' of [1].
- removed 'Patch 7' from [1] as it is merged now.
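
A rough sketch of the 'node->data' arrangement referenced above; the
structure and function names here are simplified placeholders, not the
actual tegra234 driver code:

#include <linux/err.h>
#include <linux/interconnect-provider.h>

/* Stand-in for one entry of a static 'tegra_mc_clients'-style table */
struct example_mc_client {
	unsigned int id;
	const char *name;
	/* per-client info (type, BPMP client id, ...) would live here */
};

static int example_register_clients(struct icc_provider *provider,
				    const struct example_mc_client *clients,
				    unsigned int num, int emc_node_id)
{
	unsigned int i;

	for (i = 0; i < num; i++) {
		struct icc_node *node;
		int err;

		node = icc_node_create(clients[i].id);
		if (IS_ERR(node))
			return PTR_ERR(node);

		node->name = clients[i].name;
		/* no separate allocation: point straight at the table entry */
		node->data = (void *)&clients[i];
		icc_node_add(node, provider);

		/* each client links to the EMC node on the way to DRAM */
		err = icc_link_create(node, emc_node_id);
		if (err)
			return err;
	}

	return 0;
}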

Sumit Gupta (9):
  firmware: tegra: add function to get BPMP data
  memory: tegra: add interconnect support for DRAM scaling in Tegra234
  memory: tegra: add mc clients for Tegra234
  memory: tegra: add software mc clients in Tegra234
  dt-bindings: tegra: add icc ids for dummy MC clients
  arm64: tegra: Add cpu OPP tables and interconnects property
  cpufreq: tegra194: add OPP support and set bandwidth
  memory: tegra: make cpu cluster bw request a multiple of mc channels
  PCI: tegra194: add interconnect support in Tegra234

 arch/arm64/boot/dts/nvidia/tegra234.dtsi   | 276 ++++++++++
 drivers/cpufreq/tegra194-cpufreq.c         | 152 +++++-
 drivers/firmware/tegra/bpmp.c              |  38 ++
 drivers/memory/tegra/mc.c                  |  24 +
 drivers/memory/tegra/mc.h                  |   1 +
 drivers/memory/tegra/tegra186-emc.c        | 117 ++++
 drivers/memory/tegra/tegra234.c            | 593 ++++++++++++++++++++-
 drivers/pci/controller/dwc/pcie-tegra194.c |  40 +-
 include/dt-bindings/memory/tegra234-mc.h   |   5 +
 include/linux/tegra-icc.h                  |  65 +++
 include/soc/tegra/bpmp.h                   |   5 +
 include/soc/tegra/mc.h                     |   6 +
 12 files changed, 1300 insertions(+), 22 deletions(-)
 create mode 100644 include/linux/tegra-icc.h

[1] https://lore.kernel.org/lkml/20221220160240.27494-1-sumitg@nvidia.com/T/

Comments

Viresh Kumar Feb. 22, 2023, 4:03 a.m. UTC | #1
On 20-02-23, 19:35, Sumit Gupta wrote:
> +static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz)
> +{
> +	struct dev_pm_opp *opp;
> +	struct device *dev;
> +	int ret;
> +
> +	dev = get_cpu_device(policy->cpu);
> +	if (!dev)
> +		return -ENODEV;
> +
> +	opp = dev_pm_opp_find_freq_exact(dev, freq_khz * KHZ, true);
> +	if (IS_ERR(opp))
> +		return PTR_ERR(opp);
> +
> +	ret = dev_pm_opp_set_opp(dev, opp);
> +	dev_pm_opp_put(opp);

What about dev_pm_opp_set_rate() instead ?

> +	return ret;
> +}
Sumit Gupta Feb. 23, 2023, 9:36 a.m. UTC | #2
On 22/02/23 09:33, Viresh Kumar wrote:
> On 20-02-23, 19:35, Sumit Gupta wrote:
>> +static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz)
>> +{
>> +     struct dev_pm_opp *opp;
>> +     struct device *dev;
>> +     int ret;
>> +
>> +     dev = get_cpu_device(policy->cpu);
>> +     if (!dev)
>> +             return -ENODEV;
>> +
>> +     opp = dev_pm_opp_find_freq_exact(dev, freq_khz * KHZ, true);
>> +     if (IS_ERR(opp))
>> +             return PTR_ERR(opp);
>> +
>> +     ret = dev_pm_opp_set_opp(dev, opp);
>> +     dev_pm_opp_put(opp);
> 
> What about dev_pm_opp_set_rate() instead ?
> 
>> +     return ret;
>> +}
> 
> --
> viresh

Tried using it and got the crash below. It seems to happen because we
don't have a 'clocks' property within the CPU node on SoCs that have BPMP-FW.

  Unable to handle kernel NULL pointer dereference at virtual address 000000000000002e
  ....
  Call trace:
   clk_round_rate+0x38/0xd8
   dev_pm_opp_set_rate+0xe4/0x1a8
   tegra194_cpufreq_set_target+0x74/0x88
   __cpufreq_driver_target+0x154/0x250
   cpufreq_online+0x7b4/0x9ac

Thanks,
Sumit
Viresh Kumar Feb. 28, 2023, 1:18 a.m. UTC | #3
On 27-02-23, 13:44, Thierry Reding wrote:
> On Thu, Feb 23, 2023 at 03:06:26PM +0530, Sumit Gupta wrote:
> > On 22/02/23 09:33, Viresh Kumar wrote:
> > Tried using it and got the crash below. It seems to happen because we
> > don't have a 'clocks' property within the CPU node on SoCs that have BPMP-FW.
> > 
> >  Unable to handle kernel NULL pointer dereference at virtual address 000000000000002e
> >  ....
> >  Call trace:
> >   clk_round_rate+0x38/0xd8
> >   dev_pm_opp_set_rate+0xe4/0x1a8
> >   tegra194_cpufreq_set_target+0x74/0x88
> >   __cpufreq_driver_target+0x154/0x250
> >   cpufreq_online+0x7b4/0x9ac
> 
> Can you try to find out what exactly is happening here? The clock
> framework should be able to deal with NULL clock pointers just fine.
> Although, looking at the OPP table code, it seems like we don't use
> clk_get_optional(), so opp_table->clk may end up being a pointer-
> encoded error. Perhaps we need something like this:
> 
> --- >8 ---
> diff --git a/drivers/opp/core.c b/drivers/opp/core.c
> index e87567dbe99f..d7baeb6ac697 100644
> --- a/drivers/opp/core.c
> +++ b/drivers/opp/core.c
> @@ -1397,6 +1397,7 @@ static struct opp_table *_update_opp_table_clk(struct device *dev,
>  		 * frequency in opp->rates and also parse the entries in DT.
>  		 */
>  		opp_table->clk_count = 1;
> +		opp_table->clk = NULL;
>  
>  		dev_dbg(dev, "%s: Couldn't find clock: %d\n", __func__, ret);
>  		return opp_table;

I didn't reply earlier as I had nothing more to say and Sumit's
initial approach was correct. Maybe I should have clarified this then.

The OPP core supports dev_pm_opp_set_rate() only for devices that can
set the rate; for everything else, dev_pm_opp_set_opp() is the right
choice. I suggested dev_pm_opp_set_rate() earlier because I thought the
rate was supported here.
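
To make the distinction concrete, here is a hedged sketch of the two call
paths; the helper name and the dev_controls_clk flag are invented for
illustration and are not part of any driver:

#include <linux/err.h>
#include <linux/pm_opp.h>
#include <linux/types.h>

static int example_set_perf(struct device *dev, unsigned long freq_hz,
			    bool dev_controls_clk)
{
	struct dev_pm_opp *opp;
	int ret;

	if (dev_controls_clk)
		/* OPP core sets the clock (plus regulators, ICC BW, ...) */
		return dev_pm_opp_set_rate(dev, freq_hz);

	/*
	 * No clock the kernel can program (e.g. the frequency is owned by
	 * the BPMP-FW): pick the OPP explicitly and apply its other
	 * requirements (bandwidth, required-opps, ...).
	 */
	opp = dev_pm_opp_find_freq_ceil(dev, &freq_hz);
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	ret = dev_pm_opp_set_opp(dev, opp);
	dev_pm_opp_put(opp);

	return ret;
}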
Krzysztof Kozlowski March 6, 2023, 3:05 p.m. UTC | #4
On 20/02/2023 15:05, Sumit Gupta wrote:
> This patch series adds memory interconnect support for the Tegra234 SoC.
> It is used to dynamically scale the DRAM frequency according to the
> bandwidth requests from different Memory Controller (MC) clients.
> MC clients use the ICC framework's icc_set_bw() API to dynamically
> request DRAM bandwidth (BW). Along the ICC path, a request is routed
> from the MC to the EMC driver. The MC driver passes the request info,
> such as the client ID, type, and frequency request, to the BPMP-FW,
> which sets the final DRAM frequency considering all existing requests.
> 
> MC and EMC are the ICC providers. Nodes in the path for a request are:
>      Client[1-n] -> MC -> EMC -> EMEM/DRAM
> 
> The patch series also adds interconnect support in the client drivers below:
> 1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For that, a
>    per-cluster OPP table is added and used in the CPUFREQ driver to
>    request the minimum BW corresponding to the given CPU frequency in
>    the OPP table of the given cluster.
> 2) PCIe driver, to request the BW required for different modes.

No dependencies or ordering written, so I am free to take memory
controller bits, I assume.

Best regards,
Krzysztof
Krzysztof Kozlowski March 6, 2023, 3:07 p.m. UTC | #5
On 06/03/2023 16:05, Krzysztof Kozlowski wrote:
> On 20/02/2023 15:05, Sumit Gupta wrote:
>> This patch series adds memory interconnect support for the Tegra234 SoC.
>> It is used to dynamically scale the DRAM frequency according to the
>> bandwidth requests from different Memory Controller (MC) clients.
>> MC clients use the ICC framework's icc_set_bw() API to dynamically
>> request DRAM bandwidth (BW). Along the ICC path, a request is routed
>> from the MC to the EMC driver. The MC driver passes the request info,
>> such as the client ID, type, and frequency request, to the BPMP-FW,
>> which sets the final DRAM frequency considering all existing requests.
>>
>> MC and EMC are the ICC providers. Nodes in the path for a request are:
>>      Client[1-n] -> MC -> EMC -> EMEM/DRAM
>>
>> The patch series also adds interconnect support in the client drivers below:
>> 1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For that, a
>>    per-cluster OPP table is added and used in the CPUFREQ driver to
>>    request the minimum BW corresponding to the given CPU frequency in
>>    the OPP table of the given cluster.
>> 2) PCIe driver, to request the BW required for different modes.
> 
> No dependencies or ordering written, so I am free to take memory
> controller bits, I assume.

And not.. NAK, since you decided to ignore my comments. Really, we do
not have time for such useless ping pong.

Best regards,
Krzysztof
Sumit Gupta March 6, 2023, 8:19 p.m. UTC | #6
On 06/03/23 20:35, Krzysztof Kozlowski wrote:
> On 20/02/2023 15:05, Sumit Gupta wrote:
>> This patch series adds memory interconnect support for the Tegra234 SoC.
>> It is used to dynamically scale the DRAM frequency according to the
>> bandwidth requests from different Memory Controller (MC) clients.
>> MC clients use the ICC framework's icc_set_bw() API to dynamically
>> request DRAM bandwidth (BW). Along the ICC path, a request is routed
>> from the MC to the EMC driver. The MC driver passes the request info,
>> such as the client ID, type, and frequency request, to the BPMP-FW,
>> which sets the final DRAM frequency considering all existing requests.
>>
>> MC and EMC are the ICC providers. Nodes in the path for a request are:
>>      Client[1-n] -> MC -> EMC -> EMEM/DRAM
>>
>> The patch series also adds interconnect support in the client drivers below:
>> 1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For that, a
>>    per-cluster OPP table is added and used in the CPUFREQ driver to
>>    request the minimum BW corresponding to the given CPU frequency in
>>    the OPP table of the given cluster.
>> 2) PCIe driver, to request the BW required for different modes.
> 
> No dependencies or ordering written, so I am free to take memory
> controller bits, I assume.
> 
> Best regards,
> Krzysztof
> 

Apologies for not mentioning the order in the cover letter. The patches
are divided into the groups below.
  Patch [9]: Memory Interconnect support in PCI (MC client)
  Patch [4-8]: Memory Interconnect support in CPUFREQ (MC client)
  Patch [1-3]: Memory Interconnect base support

Both the Memory Controller (MC) client patch groups depend on the
'Memory Interconnect base support' patches [1-3].

Thanks,
Sumit
Sumit Gupta March 6, 2023, 8:43 p.m. UTC | #7
On 06/03/23 20:37, Krzysztof Kozlowski wrote:
> On 06/03/2023 16:05, Krzysztof Kozlowski wrote:
>> On 20/02/2023 15:05, Sumit Gupta wrote:
>>> This patch series adds memory interconnect support for the Tegra234 SoC.
>>> It is used to dynamically scale the DRAM frequency according to the
>>> bandwidth requests from different Memory Controller (MC) clients.
>>> MC clients use the ICC framework's icc_set_bw() API to dynamically
>>> request DRAM bandwidth (BW). Along the ICC path, a request is routed
>>> from the MC to the EMC driver. The MC driver passes the request info,
>>> such as the client ID, type, and frequency request, to the BPMP-FW,
>>> which sets the final DRAM frequency considering all existing requests.
>>>
>>> MC and EMC are the ICC providers. Nodes in the path for a request are:
>>>      Client[1-n] -> MC -> EMC -> EMEM/DRAM
>>>
>>> The patch series also adds interconnect support in the client drivers below:
>>> 1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For that, a
>>>    per-cluster OPP table is added and used in the CPUFREQ driver to
>>>    request the minimum BW corresponding to the given CPU frequency in
>>>    the OPP table of the given cluster.
>>> 2) PCIe driver, to request the BW required for different modes.
>>
>> No dependencies or ordering written, so I am free to take memory
>> controller bits, I assume.
> 
> And not.. NAK, since you decided to ignore my comments. Really, we do
> not have time for such useless ping pong.
> 
> Best regards,
> Krzysztof
> 

Hi Krzysztof,

I tried to address the comments given during the review of v1 in v2.
I am sorry if I missed any suggestion. Please let me know so I can
incorporate it.

Thanks,
Sumit
Krzysztof Kozlowski March 7, 2023, 9:21 a.m. UTC | #8
On 06/03/2023 21:43, Sumit Gupta wrote:
> 
> 
> On 06/03/23 20:37, Krzysztof Kozlowski wrote:
>> On 06/03/2023 16:05, Krzysztof Kozlowski wrote:
>>> On 20/02/2023 15:05, Sumit Gupta wrote:
>>>> This patch series adds memory interconnect support for the Tegra234 SoC.
>>>> It is used to dynamically scale the DRAM frequency according to the
>>>> bandwidth requests from different Memory Controller (MC) clients.
>>>> MC clients use the ICC framework's icc_set_bw() API to dynamically
>>>> request DRAM bandwidth (BW). Along the ICC path, a request is routed
>>>> from the MC to the EMC driver. The MC driver passes the request info,
>>>> such as the client ID, type, and frequency request, to the BPMP-FW,
>>>> which sets the final DRAM frequency considering all existing requests.
>>>>
>>>> MC and EMC are the ICC providers. Nodes in the path for a request are:
>>>>      Client[1-n] -> MC -> EMC -> EMEM/DRAM
>>>>
>>>> The patch series also adds interconnect support in the client drivers below:
>>>> 1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For that, a
>>>>    per-cluster OPP table is added and used in the CPUFREQ driver to
>>>>    request the minimum BW corresponding to the given CPU frequency in
>>>>    the OPP table of the given cluster.
>>>> 2) PCIe driver, to request the BW required for different modes.
>>>
>>> No dependencies or ordering written, so I am free to take memory
>>> controller bits, I assume.
>>
>> And not.. NAK, since you decided to ignore my comments. Really, we do
>> not have time for such useless ping pong.
>>
>> Best regards,
>> Krzysztof
>>
> 
> Hi Krzysztof,
> 
> I tried to address the comments given during the review of v1 in v2.
> I am sorry if I missed any suggestion. Please let me know so I can
> incorporate it.
> 

I never got any feedback and my first glance suggested nothing changed.
Let me check again.

Best regards,
Krzysztof