Message ID: 20230220140559.28289-1-sumitg@nvidia.com
Series: Tegra234 Memory interconnect support
On 20-02-23, 19:35, Sumit Gupta wrote:
> +static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz)
> +{
> +	struct dev_pm_opp *opp;
> +	struct device *dev;
> +	int ret;
> +
> +	dev = get_cpu_device(policy->cpu);
> +	if (!dev)
> +		return -ENODEV;
> +
> +	opp = dev_pm_opp_find_freq_exact(dev, freq_khz * KHZ, true);
> +	if (IS_ERR(opp))
> +		return PTR_ERR(opp);
> +
> +	ret = dev_pm_opp_set_opp(dev, opp);
> +	dev_pm_opp_put(opp);

What about dev_pm_opp_set_rate() instead?

> +	return ret;
> +}
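For context, a minimal sketch (not from the series) of what the suggested dev_pm_opp_set_rate() variant might look like. It assumes the CPU's OPP table has a clock the OPP core can program, which, as the reply below shows, is not the case on SoCs where the BPMP firmware owns the CPU clock:

	static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy,
					unsigned long freq_khz)
	{
		struct device *dev;

		dev = get_cpu_device(policy->cpu);
		if (!dev)
			return -ENODEV;

		/*
		 * dev_pm_opp_set_rate() looks up the OPP matching the target
		 * frequency and applies it (clock, regulators, bandwidth
		 * votes) in one call. KHZ is the driver's kHz-to-Hz
		 * multiplier.
		 */
		return dev_pm_opp_set_rate(dev, freq_khz * KHZ);
	}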
On 22/02/23 09:33, Viresh Kumar wrote:
> On 20-02-23, 19:35, Sumit Gupta wrote:
>> +static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz)
>> +{
>> +	struct dev_pm_opp *opp;
>> +	struct device *dev;
>> +	int ret;
>> +
>> +	dev = get_cpu_device(policy->cpu);
>> +	if (!dev)
>> +		return -ENODEV;
>> +
>> +	opp = dev_pm_opp_find_freq_exact(dev, freq_khz * KHZ, true);
>> +	if (IS_ERR(opp))
>> +		return PTR_ERR(opp);
>> +
>> +	ret = dev_pm_opp_set_opp(dev, opp);
>> +	dev_pm_opp_put(opp);
>
> What about dev_pm_opp_set_rate() instead?
>
>> +	return ret;
>> +}
>
> --
> viresh

Tried using it and got the below crash. It seems to happen because we
don't have a clocks property within the CPU node on SoCs having BPMP-FW.

Unable to handle kernel NULL pointer dereference at virtual address
000000000000002e
....
Call trace:
 clk_round_rate+0x38/0xd8
 dev_pm_opp_set_rate+0xe4/0x1a8
 tegra194_cpufreq_set_target+0x74/0x88
 __cpufreq_driver_target+0x154/0x250
 cpufreq_online+0x7b4/0x9ac

Thanks,
Sumit
On 27-02-23, 13:44, Thierry Reding wrote:
> On Thu, Feb 23, 2023 at 03:06:26PM +0530, Sumit Gupta wrote:
>> On 22/02/23 09:33, Viresh Kumar wrote:
>> Tried using it and got the below crash. It seems to happen because we
>> don't have a clocks property within the CPU node on SoCs having BPMP-FW.
>>
>> Unable to handle kernel NULL pointer dereference at virtual address
>> 000000000000002e
>> ....
>> Call trace:
>>  clk_round_rate+0x38/0xd8
>>  dev_pm_opp_set_rate+0xe4/0x1a8
>>  tegra194_cpufreq_set_target+0x74/0x88
>>  __cpufreq_driver_target+0x154/0x250
>>  cpufreq_online+0x7b4/0x9ac
>
> Can you try to find out what exactly is happening here? The clock
> framework should be able to deal with NULL clock pointers just fine.
> Although, looking at the OPP table code, it seems like we don't use
> clk_get_optional(), so opp_table->clk may end up being a pointer-
> encoded error. Perhaps we need something like this:
>
> --- >8 ---
> diff --git a/drivers/opp/core.c b/drivers/opp/core.c
> index e87567dbe99f..d7baeb6ac697 100644
> --- a/drivers/opp/core.c
> +++ b/drivers/opp/core.c
> @@ -1397,6 +1397,7 @@ static struct opp_table *_update_opp_table_clk(struct device *dev,
>  	 * frequency in opp->rates and also parse the entries in DT.
>  	 */
>  	opp_table->clk_count = 1;
> +	opp_table->clk = NULL;
>
>  	dev_dbg(dev, "%s: Couldn't find clock: %d\n", __func__, ret);
>  	return opp_table;

I didn't reply earlier as I had nothing more to say; Sumit's initial
approach was correct. Maybe I should have clarified that at the time.

The OPP core supports dev_pm_opp_set_rate() only for devices that can set
the rate; for everything else, dev_pm_opp_set_opp() is the right choice.
I suggested dev_pm_opp_set_rate() earlier because I thought setting the
rate was supported here.
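To make that distinction concrete, a hedged sketch (not from the thread; the helper name and the can_set_rate flag are hypothetical) contrasting the two calls:

	static int apply_opp_sketch(struct device *dev, unsigned long target_hz,
				    bool can_set_rate)
	{
		struct dev_pm_opp *opp;
		int ret;

		if (can_set_rate)
			/*
			 * The OPP core rounds target_hz to an OPP, then
			 * programs the clock along with the OPP's other
			 * resources (regulators, bandwidth, required OPPs).
			 */
			return dev_pm_opp_set_rate(dev, target_hz);

		/*
		 * No settable clock (e.g. firmware owns the CPU clock): find
		 * the OPP explicitly and apply only its non-clock resources,
		 * such as the interconnect bandwidth votes in this series.
		 */
		opp = dev_pm_opp_find_freq_exact(dev, target_hz, true);
		if (IS_ERR(opp))
			return PTR_ERR(opp);

		ret = dev_pm_opp_set_opp(dev, opp);
		dev_pm_opp_put(opp);

		return ret;
	}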
On 20/02/2023 15:05, Sumit Gupta wrote:
> This patch series adds memory interconnect support for the Tegra234 SoC.
> It is used to dynamically scale the DRAM frequency as per the bandwidth
> requests from different Memory Controller (MC) clients.
> MC clients use the ICC framework's icc_set_bw() API to dynamically
> request DRAM bandwidth (BW). Depending on the path, the request is
> routed from the MC to the EMC driver. The MC driver passes the request
> info, like the client ID, type, and frequency request, to the BPMP-FW,
> which sets the final DRAM frequency considering all existing requests.
>
> MC and EMC are the ICC providers. The nodes in the path for a request are:
> Client[1-n] -> MC -> EMC -> EMEM/DRAM
>
> The patch series also adds interconnect support in the below client
> drivers:
> 1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For that, a
>    per-cluster OPP table is added, which the CPUFREQ driver uses to
>    request the minimum BW corresponding to a given CPU frequency in the
>    OPP table of the given cluster.
> 2) PCIe driver, to request the BW required for different modes.

No dependencies or ordering written, so I am free to take the memory
controller bits, I assume.

Best regards,
Krzysztof
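For reference, a minimal sketch (not taken from the series) of how an MC client might vote for DRAM bandwidth through the ICC framework; the "dram" path name and the bandwidth numbers are hypothetical:

	#include <linux/interconnect.h>

	static int client_vote_dram_bw(struct device *dev)
	{
		struct icc_path *path;
		int ret;

		/* Resolve the Client -> MC -> EMC -> EMEM/DRAM path from DT. */
		path = of_icc_get(dev, "dram");
		if (IS_ERR(path))
			return PTR_ERR(path);

		/* Average and peak BW; MBps_to_icc() converts MB/s to kB/s. */
		ret = icc_set_bw(path, MBps_to_icc(1000), MBps_to_icc(2000));

		icc_put(path);
		return ret;
	}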
On 06/03/2023 16:05, Krzysztof Kozlowski wrote:
> On 20/02/2023 15:05, Sumit Gupta wrote:
>> This patch series adds memory interconnect support for the Tegra234 SoC.
>> It is used to dynamically scale the DRAM frequency as per the bandwidth
>> requests from different Memory Controller (MC) clients.
>> MC clients use the ICC framework's icc_set_bw() API to dynamically
>> request DRAM bandwidth (BW). Depending on the path, the request is
>> routed from the MC to the EMC driver. The MC driver passes the request
>> info, like the client ID, type, and frequency request, to the BPMP-FW,
>> which sets the final DRAM frequency considering all existing requests.
>>
>> MC and EMC are the ICC providers. The nodes in the path for a request are:
>> Client[1-n] -> MC -> EMC -> EMEM/DRAM
>>
>> The patch series also adds interconnect support in the below client
>> drivers:
>> 1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For that, a
>>    per-cluster OPP table is added, which the CPUFREQ driver uses to
>>    request the minimum BW corresponding to a given CPU frequency in the
>>    OPP table of the given cluster.
>> 2) PCIe driver, to request the BW required for different modes.
>
> No dependencies or ordering written, so I am free to take the memory
> controller bits, I assume.

And no: NAK, since you decided to ignore my comments. Really, we do not
have time for such useless ping-pong.

Best regards,
Krzysztof
On 06/03/23 20:35, Krzysztof Kozlowski wrote:
> On 20/02/2023 15:05, Sumit Gupta wrote:
>> This patch series adds memory interconnect support for the Tegra234 SoC.
>> It is used to dynamically scale the DRAM frequency as per the bandwidth
>> requests from different Memory Controller (MC) clients.
>> MC clients use the ICC framework's icc_set_bw() API to dynamically
>> request DRAM bandwidth (BW). Depending on the path, the request is
>> routed from the MC to the EMC driver. The MC driver passes the request
>> info, like the client ID, type, and frequency request, to the BPMP-FW,
>> which sets the final DRAM frequency considering all existing requests.
>>
>> MC and EMC are the ICC providers. The nodes in the path for a request are:
>> Client[1-n] -> MC -> EMC -> EMEM/DRAM
>>
>> The patch series also adds interconnect support in the below client
>> drivers:
>> 1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For that, a
>>    per-cluster OPP table is added, which the CPUFREQ driver uses to
>>    request the minimum BW corresponding to a given CPU frequency in the
>>    OPP table of the given cluster.
>> 2) PCIe driver, to request the BW required for different modes.
>
> No dependencies or ordering written, so I am free to take the memory
> controller bits, I assume.
>
> Best regards,
> Krzysztof

Apologies for not mentioning the order in the cover letter. The patches
are divided into the below groups:

Patch [9]:     memory interconnect support in PCIe (MC client)
Patches [4-8]: memory interconnect support in CPUFREQ (MC client)
Patches [1-3]: memory interconnect base support

Both of the Memory Controller (MC) client groups depend on the memory
interconnect base support patches [1-3].

Thanks,
Sumit
On 06/03/23 20:37, Krzysztof Kozlowski wrote:
> On 06/03/2023 16:05, Krzysztof Kozlowski wrote:
>> On 20/02/2023 15:05, Sumit Gupta wrote:
>>> This patch series adds memory interconnect support for the Tegra234 SoC.
>>> It is used to dynamically scale the DRAM frequency as per the bandwidth
>>> requests from different Memory Controller (MC) clients.
>>> MC clients use the ICC framework's icc_set_bw() API to dynamically
>>> request DRAM bandwidth (BW). Depending on the path, the request is
>>> routed from the MC to the EMC driver. The MC driver passes the request
>>> info, like the client ID, type, and frequency request, to the BPMP-FW,
>>> which sets the final DRAM frequency considering all existing requests.
>>>
>>> MC and EMC are the ICC providers. The nodes in the path for a request are:
>>> Client[1-n] -> MC -> EMC -> EMEM/DRAM
>>>
>>> The patch series also adds interconnect support in the below client
>>> drivers:
>>> 1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For that, a
>>>    per-cluster OPP table is added, which the CPUFREQ driver uses to
>>>    request the minimum BW corresponding to a given CPU frequency in the
>>>    OPP table of the given cluster.
>>> 2) PCIe driver, to request the BW required for different modes.
>>
>> No dependencies or ordering written, so I am free to take the memory
>> controller bits, I assume.
>
> And no: NAK, since you decided to ignore my comments. Really, we do not
> have time for such useless ping-pong.
>
> Best regards,
> Krzysztof

Hi Krzysztof,

I tried to address the comments given during the review of v1 in v2.
I am sorry if I missed any suggestion. Please let me know so that I can
incorporate it.

Thanks,
Sumit
On 06/03/2023 21:43, Sumit Gupta wrote:
> On 06/03/23 20:37, Krzysztof Kozlowski wrote:
>> On 06/03/2023 16:05, Krzysztof Kozlowski wrote:
>>> On 20/02/2023 15:05, Sumit Gupta wrote:
>>>> This patch series adds memory interconnect support for the Tegra234 SoC.
>>>> It is used to dynamically scale the DRAM frequency as per the bandwidth
>>>> requests from different Memory Controller (MC) clients.
>>>> MC clients use the ICC framework's icc_set_bw() API to dynamically
>>>> request DRAM bandwidth (BW). Depending on the path, the request is
>>>> routed from the MC to the EMC driver. The MC driver passes the request
>>>> info, like the client ID, type, and frequency request, to the BPMP-FW,
>>>> which sets the final DRAM frequency considering all existing requests.
>>>>
>>>> MC and EMC are the ICC providers. The nodes in the path for a request are:
>>>> Client[1-n] -> MC -> EMC -> EMEM/DRAM
>>>>
>>>> The patch series also adds interconnect support in the below client
>>>> drivers:
>>>> 1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For that, a
>>>>    per-cluster OPP table is added, which the CPUFREQ driver uses to
>>>>    request the minimum BW corresponding to a given CPU frequency in the
>>>>    OPP table of the given cluster.
>>>> 2) PCIe driver, to request the BW required for different modes.
>>>
>>> No dependencies or ordering written, so I am free to take the memory
>>> controller bits, I assume.
>>
>> And no: NAK, since you decided to ignore my comments. Really, we do not
>> have time for such useless ping-pong.
>>
>> Best regards,
>> Krzysztof
>
> Hi Krzysztof,
>
> I tried to address the comments given during the review of v1 in v2.
> I am sorry if I missed any suggestion. Please let me know so that I can
> incorporate it.

I never got any feedback, and my first glance suggested nothing had
changed. Let me check again.

Best regards,
Krzysztof