Message ID | 20250228-a623-gpu-support-v2-5-aea654ecc1d3@quicinc.com
---|---
State | Superseded
Series | Support for Adreno 623 GPU
On 2/27/25 9:07 PM, Akhil P Oommen wrote:
> From: Jie Zhang <quic_jiezh@quicinc.com>
>
> Add gpu and gmu nodes for qcs8300 chipset.
>
> Signed-off-by: Jie Zhang <quic_jiezh@quicinc.com>
> Signed-off-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
> ---

[...]

> +		gmu: gmu@3d6a000 {
> +			compatible = "qcom,adreno-gmu-623.0", "qcom,adreno-gmu";
> +			reg = <0x0 0x03d6a000 0x0 0x34000>,

size = 0x26000 so that it doesn't leak into GPU_CC

> +			      <0x0 0x03de0000 0x0 0x10000>,
> +			      <0x0 0x0b290000 0x0 0x10000>;
> +			reg-names = "gmu", "rscc", "gmu_pdc";
> +			interrupts = <GIC_SPI 304 IRQ_TYPE_LEVEL_HIGH>,
> +				     <GIC_SPI 305 IRQ_TYPE_LEVEL_HIGH>;
> +			interrupt-names = "hfi", "gmu";
> +			clocks = <&gpucc GPU_CC_CX_GMU_CLK>,
> +				 <&gpucc GPU_CC_CXO_CLK>,
> +				 <&gcc GCC_DDRSS_GPU_AXI_CLK>,
> +				 <&gcc GCC_GPU_MEMNOC_GFX_CLK>,
> +				 <&gpucc GPU_CC_AHB_CLK>,
> +				 <&gpucc GPU_CC_HUB_CX_INT_CLK>,
> +				 <&gpucc GPU_CC_HLOS1_VOTE_GPU_SMMU_CLK>;

This should only be bound to the SMMU

> +			clock-names = "gmu",
> +				      "cxo",
> +				      "axi",
> +				      "memnoc",
> +				      "ahb",
> +				      "hub",
> +				      "smmu_vote";
> +			power-domains = <&gpucc GPU_CC_CX_GDSC>,
> +					<&gpucc GPU_CC_GX_GDSC>;
> +			power-domain-names = "cx",
> +					     "gx";
> +			iommus = <&adreno_smmu 5 0xc00>;
> +			operating-points-v2 = <&gmu_opp_table>;
> +
> +			gmu_opp_table: opp-table {
> +				compatible = "operating-points-v2";
> +
> +				opp-200000000 {
> +					opp-hz = /bits/ 64 <200000000>;

It looks like this clock only has a 500 Mhz rate

Konrad

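(For reference: the gpucc node in qcs8300.dtsi, visible as context at the end of the diff below, sits at 0x03d90000, so 0x03d90000 - 0x03d6a000 = 0x26000 is the largest GMU region that stops short of GPU_CC; the posted size of 0x34000 runs up to 0x03d9e000, i.e. 0xe000 into the clock controller's range.)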
On 4/14/2025 4:31 PM, Konrad Dybcio wrote:
> On 2/27/25 9:07 PM, Akhil P Oommen wrote:
>> From: Jie Zhang <quic_jiezh@quicinc.com>
>>
>> Add gpu and gmu nodes for qcs8300 chipset.
>>
>> Signed-off-by: Jie Zhang <quic_jiezh@quicinc.com>
>> Signed-off-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
>> ---
>
> [...]
>
>> +		gmu: gmu@3d6a000 {
>> +			compatible = "qcom,adreno-gmu-623.0", "qcom,adreno-gmu";
>> +			reg = <0x0 0x03d6a000 0x0 0x34000>,
>
> size = 0x26000 so that it doesn't leak into GPU_CC

We dump GPUCC regs into snapshot!

>
>> +			      <0x0 0x03de0000 0x0 0x10000>,
>> +			      <0x0 0x0b290000 0x0 0x10000>;
>> +			reg-names = "gmu", "rscc", "gmu_pdc";
>> +			interrupts = <GIC_SPI 304 IRQ_TYPE_LEVEL_HIGH>,
>> +				     <GIC_SPI 305 IRQ_TYPE_LEVEL_HIGH>;
>> +			interrupt-names = "hfi", "gmu";
>> +			clocks = <&gpucc GPU_CC_CX_GMU_CLK>,
>> +				 <&gpucc GPU_CC_CXO_CLK>,
>> +				 <&gcc GCC_DDRSS_GPU_AXI_CLK>,
>> +				 <&gcc GCC_GPU_MEMNOC_GFX_CLK>,
>> +				 <&gpucc GPU_CC_AHB_CLK>,
>> +				 <&gpucc GPU_CC_HUB_CX_INT_CLK>,
>> +				 <&gpucc GPU_CC_HLOS1_VOTE_GPU_SMMU_CLK>;
>
> This should only be bound to the SMMU

Not sure how this sneaked in. Will remove this.

>
>> +			clock-names = "gmu",
>> +				      "cxo",
>> +				      "axi",
>> +				      "memnoc",
>> +				      "ahb",
>> +				      "hub",
>> +				      "smmu_vote";
>> +			power-domains = <&gpucc GPU_CC_CX_GDSC>,
>> +					<&gpucc GPU_CC_GX_GDSC>;
>> +			power-domain-names = "cx",
>> +					     "gx";
>> +			iommus = <&adreno_smmu 5 0xc00>;
>> +			operating-points-v2 = <&gmu_opp_table>;
>> +
>> +			gmu_opp_table: opp-table {
>> +				compatible = "operating-points-v2";
>> +
>> +				opp-200000000 {
>> +					opp-hz = /bits/ 64 <200000000>;
>
> It looks like this clock only has a 500 Mhz rate

Ack.

-Akhil.

>
> Konrad

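Given the agreement above to drop the stray SMMU vote entry, the GMU clock list would presumably end up looking roughly like this (a sketch only, not taken from a later posted revision):

```
clocks = <&gpucc GPU_CC_CX_GMU_CLK>,
	 <&gpucc GPU_CC_CXO_CLK>,
	 <&gcc GCC_DDRSS_GPU_AXI_CLK>,
	 <&gcc GCC_GPU_MEMNOC_GFX_CLK>,
	 <&gpucc GPU_CC_AHB_CLK>,
	 <&gpucc GPU_CC_HUB_CX_INT_CLK>;
clock-names = "gmu",
	      "cxo",
	      "axi",
	      "memnoc",
	      "ahb",
	      "hub";
```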
On 4/28/25 12:44 PM, Akhil P Oommen wrote:
> On 4/14/2025 4:31 PM, Konrad Dybcio wrote:
>> On 2/27/25 9:07 PM, Akhil P Oommen wrote:
>>> From: Jie Zhang <quic_jiezh@quicinc.com>
>>>
>>> Add gpu and gmu nodes for qcs8300 chipset.
>>>
>>> Signed-off-by: Jie Zhang <quic_jiezh@quicinc.com>
>>> Signed-off-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
>>> ---
>>
>> [...]
>>
>>> +		gmu: gmu@3d6a000 {
>>> +			compatible = "qcom,adreno-gmu-623.0", "qcom,adreno-gmu";
>>> +			reg = <0x0 0x03d6a000 0x0 0x34000>,
>>
>> size = 0x26000 so that it doesn't leak into GPU_CC
>
> We dump GPUCC regs into snapshot!

Right, that's bad.. the dt heuristics are such that each region
is mapped by a single device that it belongs to, with some rare
exceptions..

Instead, the moderately dirty way would be to expose gpucc as
syscon & pass it to the GPU device, or the clean way would be
to implement an API within the clock framework that would dump
the relevant registers

Konrad

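For illustration, the "moderately dirty" syscon route could look roughly like the sketch below on the DT side. The extra "syscon" compatible and the qcom,gpucc phandle property are assumptions made for this example, not an existing binding; on the driver side, the GPU driver could then resolve the phandle with syscon_regmap_lookup_by_phandle() and read the registers it wants to snapshot through the regmap.

```
/* Sketch: tag GPU_CC as a syscon so another device can obtain a regmap
 * for it. "qcom,gpucc" below is a hypothetical property name.
 */
gpucc: clock-controller@3d90000 {
	compatible = "qcom,qcs8300-gpucc", "syscon";
	reg = <0x0 0x03d90000 0x0 0xa000>;
	/* remaining clock-controller properties unchanged */
};

&gpu {
	/* hypothetical phandle the GPU driver would use for GPU_CC dumps */
	qcom,gpucc = <&gpucc>;
};
```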
On Mon, Apr 28, 2025 at 11:19:32PM +0200, Konrad Dybcio wrote:
> On 4/28/25 12:44 PM, Akhil P Oommen wrote:
> > On 4/14/2025 4:31 PM, Konrad Dybcio wrote:
> >> On 2/27/25 9:07 PM, Akhil P Oommen wrote:
> >>> From: Jie Zhang <quic_jiezh@quicinc.com>
> >>>
> >>> Add gpu and gmu nodes for qcs8300 chipset.
> >>>
> >>> Signed-off-by: Jie Zhang <quic_jiezh@quicinc.com>
> >>> Signed-off-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
> >>> ---
> >>
> >> [...]
> >>
> >>> +		gmu: gmu@3d6a000 {
> >>> +			compatible = "qcom,adreno-gmu-623.0", "qcom,adreno-gmu";
> >>> +			reg = <0x0 0x03d6a000 0x0 0x34000>,
> >>
> >> size = 0x26000 so that it doesn't leak into GPU_CC
> >
> > We dump GPUCC regs into snapshot!
>
> Right, that's bad.. the dt heuristics are such that each region
> is mapped by a single device that it belongs to, with some rare
> exceptions..

It has been like this for most (all?) GMU / GPUCC generations.

>
> Instead, the moderately dirty way would be to expose gpucc as
> syscon & pass it to the GPU device, or the clean way would be
> to implement an API within the clock framework that would dump
> the relevant registers
>
> Konrad

On 4/29/25 2:17 PM, Dmitry Baryshkov wrote:
> On Mon, Apr 28, 2025 at 11:19:32PM +0200, Konrad Dybcio wrote:
>> On 4/28/25 12:44 PM, Akhil P Oommen wrote:
>>> On 4/14/2025 4:31 PM, Konrad Dybcio wrote:
>>>> On 2/27/25 9:07 PM, Akhil P Oommen wrote:
>>>>> From: Jie Zhang <quic_jiezh@quicinc.com>
>>>>>
>>>>> Add gpu and gmu nodes for qcs8300 chipset.
>>>>>
>>>>> Signed-off-by: Jie Zhang <quic_jiezh@quicinc.com>
>>>>> Signed-off-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
>>>>> ---
>>>>
>>>> [...]
>>>>
>>>>> +		gmu: gmu@3d6a000 {
>>>>> +			compatible = "qcom,adreno-gmu-623.0", "qcom,adreno-gmu";
>>>>> +			reg = <0x0 0x03d6a000 0x0 0x34000>,
>>>>
>>>> size = 0x26000 so that it doesn't leak into GPU_CC
>>>
>>> We dump GPUCC regs into snapshot!
>>
>> Right, that's bad.. the dt heuristics are such that each region
>> is mapped by a single device that it belongs to, with some rare
>> exceptions..
>
> It has been like this for most (all?) GMU / GPUCC generations.

Eeeeh fine, let's keep it here and fix it the next time (tm)

Konrad

On Wed, Apr 30, 2025 at 3:39 AM Konrad Dybcio
<konrad.dybcio@oss.qualcomm.com> wrote:
>
> On 4/29/25 2:17 PM, Dmitry Baryshkov wrote:
> > On Mon, Apr 28, 2025 at 11:19:32PM +0200, Konrad Dybcio wrote:
> >> On 4/28/25 12:44 PM, Akhil P Oommen wrote:
> >>> On 4/14/2025 4:31 PM, Konrad Dybcio wrote:
> >>>> On 2/27/25 9:07 PM, Akhil P Oommen wrote:
> >>>>> From: Jie Zhang <quic_jiezh@quicinc.com>
> >>>>>
> >>>>> Add gpu and gmu nodes for qcs8300 chipset.
> >>>>>
> >>>>> Signed-off-by: Jie Zhang <quic_jiezh@quicinc.com>
> >>>>> Signed-off-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
> >>>>> ---
> >>>>
> >>>> [...]
> >>>>
> >>>>> +		gmu: gmu@3d6a000 {
> >>>>> +			compatible = "qcom,adreno-gmu-623.0", "qcom,adreno-gmu";
> >>>>> +			reg = <0x0 0x03d6a000 0x0 0x34000>,
> >>>>
> >>>> size = 0x26000 so that it doesn't leak into GPU_CC
> >>>
> >>> We dump GPUCC regs into snapshot!
> >>
> >> Right, that's bad.. the dt heuristics are such that each region
> >> is mapped by a single device that it belongs to, with some rare
> >> exceptions..
> >
> > It has been like this for most (all?) GMU / GPUCC generations.
>
> Eeeeh fine, let's keep it here and fix it the next time (tm)

Maybe it would be reasonable to add a comment about this _somewhere_?
(Bindings doc?)  I feel like this confusion has come up before.  Maybe
it is a bit "ugly" but since gmu is directly banging on gpucc, it
doesn't seem completely inappropriate.

BR,
-R

On 5/1/2025 12:10 AM, Rob Clark wrote:
> On Wed, Apr 30, 2025 at 3:39 AM Konrad Dybcio
> <konrad.dybcio@oss.qualcomm.com> wrote:
>>
>> On 4/29/25 2:17 PM, Dmitry Baryshkov wrote:
>>> On Mon, Apr 28, 2025 at 11:19:32PM +0200, Konrad Dybcio wrote:
>>>> On 4/28/25 12:44 PM, Akhil P Oommen wrote:
>>>>> On 4/14/2025 4:31 PM, Konrad Dybcio wrote:
>>>>>> On 2/27/25 9:07 PM, Akhil P Oommen wrote:
>>>>>>> From: Jie Zhang <quic_jiezh@quicinc.com>
>>>>>>>
>>>>>>> Add gpu and gmu nodes for qcs8300 chipset.
>>>>>>>
>>>>>>> Signed-off-by: Jie Zhang <quic_jiezh@quicinc.com>
>>>>>>> Signed-off-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
>>>>>>> ---
>>>>>>
>>>>>> [...]
>>>>>>
>>>>>>> +		gmu: gmu@3d6a000 {
>>>>>>> +			compatible = "qcom,adreno-gmu-623.0", "qcom,adreno-gmu";
>>>>>>> +			reg = <0x0 0x03d6a000 0x0 0x34000>,
>>>>>>
>>>>>> size = 0x26000 so that it doesn't leak into GPU_CC
>>>>>
>>>>> We dump GPUCC regs into snapshot!
>>>>
>>>> Right, that's bad.. the dt heuristics are such that each region
>>>> is mapped by a single device that it belongs to, with some rare
>>>> exceptions..
>>>
>>> It has been like this for most (all?) GMU / GPUCC generations.
>>
>> Eeeeh fine, let's keep it here and fix it the next time (tm)
>
> Maybe it would be reasonable to add a comment about this _somewhere_?
> (Bindings doc?)  I feel like this confusion has come up before.  Maybe
> it is a bit "ugly" but since gmu is directly banging on gpucc, it
> doesn't seem completely inappropriate.

That's right. This is a shared region between Linux clk driver and GMU
firmware's clock driver.

-Akhil.

>
> BR,
> -R

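If the note Rob asks for were placed in the dtsi rather than the bindings, it could be as simple as a comment next to the reg entry. The wording below is only a suggestion derived from this thread, not something posted in the series:

```
gmu: gmu@3d6a000 {
	compatible = "qcom,adreno-gmu-623.0", "qcom,adreno-gmu";
	/*
	 * The "gmu" region intentionally covers GPU_CC as well: that range
	 * is shared between the Linux clk driver and the GMU firmware's
	 * clock driver, and the GPU driver dumps it into snapshots.
	 */
	reg = <0x0 0x03d6a000 0x0 0x34000>,
	      <0x0 0x03de0000 0x0 0x10000>,
	      <0x0 0x0b290000 0x0 0x10000>;
	reg-names = "gmu", "rscc", "gmu_pdc";
};
```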
diff --git a/arch/arm64/boot/dts/qcom/qcs8300.dtsi b/arch/arm64/boot/dts/qcom/qcs8300.dtsi
index f1c90db7b0e689035fbbaaa551611be34adf9ab6..2dc487dcc584cd0a057e18c53e2f945b8636ad14 100644
--- a/arch/arm64/boot/dts/qcom/qcs8300.dtsi
+++ b/arch/arm64/boot/dts/qcom/qcs8300.dtsi
@@ -2660,6 +2660,99 @@ serdes0: phy@8909000 {
 			status = "disabled";
 		};
 
+		gpu: gpu@3d00000 {
+			compatible = "qcom,adreno-623.0", "qcom,adreno";
+			reg = <0x0 0x03d00000 0x0 0x40000>,
+			      <0x0 0x03d9e000 0x0 0x1000>,
+			      <0x0 0x03d61000 0x0 0x800>;
+			reg-names = "kgsl_3d0_reg_memory",
+				    "cx_mem",
+				    "cx_dbgc";
+			interrupts = <GIC_SPI 300 IRQ_TYPE_LEVEL_HIGH>;
+			iommus = <&adreno_smmu 0 0xc00>,
+				 <&adreno_smmu 1 0xc00>;
+			operating-points-v2 = <&gpu_opp_table>;
+			qcom,gmu = <&gmu>;
+			interconnects = <&gem_noc MASTER_GFX3D QCOM_ICC_TAG_ALWAYS
+					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+			interconnect-names = "gfx-mem";
+			#cooling-cells = <2>;
+
+			status = "disabled";
+
+			gpu_zap_shader: zap-shader {
+				memory-region = <&gpu_microcode_mem>;
+			};
+
+			gpu_opp_table: opp-table {
+				compatible = "operating-points-v2";
+
+				opp-877000000 {
+					opp-hz = /bits/ 64 <877000000>;
+					opp-level = <RPMH_REGULATOR_LEVEL_TURBO_L1>;
+					opp-peak-kBps = <12484375>;
+				};
+
+				opp-780000000 {
+					opp-hz = /bits/ 64 <780000000>;
+					opp-level = <RPMH_REGULATOR_LEVEL_TURBO>;
+					opp-peak-kBps = <10687500>;
+				};
+
+				opp-599000000 {
+					opp-hz = /bits/ 64 <599000000>;
+					opp-level = <RPMH_REGULATOR_LEVEL_NOM>;
+					opp-peak-kBps = <8171875>;
+				};
+
+				opp-479000000 {
+					opp-hz = /bits/ 64 <479000000>;
+					opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>;
+					opp-peak-kBps = <5285156>;
+				};
+			};
+		};
+
+		gmu: gmu@3d6a000 {
+			compatible = "qcom,adreno-gmu-623.0", "qcom,adreno-gmu";
+			reg = <0x0 0x03d6a000 0x0 0x34000>,
+			      <0x0 0x03de0000 0x0 0x10000>,
+			      <0x0 0x0b290000 0x0 0x10000>;
+			reg-names = "gmu", "rscc", "gmu_pdc";
+			interrupts = <GIC_SPI 304 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 305 IRQ_TYPE_LEVEL_HIGH>;
+			interrupt-names = "hfi", "gmu";
+			clocks = <&gpucc GPU_CC_CX_GMU_CLK>,
+				 <&gpucc GPU_CC_CXO_CLK>,
+				 <&gcc GCC_DDRSS_GPU_AXI_CLK>,
+				 <&gcc GCC_GPU_MEMNOC_GFX_CLK>,
+				 <&gpucc GPU_CC_AHB_CLK>,
+				 <&gpucc GPU_CC_HUB_CX_INT_CLK>,
+				 <&gpucc GPU_CC_HLOS1_VOTE_GPU_SMMU_CLK>;
+			clock-names = "gmu",
+				      "cxo",
+				      "axi",
+				      "memnoc",
+				      "ahb",
+				      "hub",
+				      "smmu_vote";
+			power-domains = <&gpucc GPU_CC_CX_GDSC>,
+					<&gpucc GPU_CC_GX_GDSC>;
+			power-domain-names = "cx",
+					     "gx";
+			iommus = <&adreno_smmu 5 0xc00>;
+			operating-points-v2 = <&gmu_opp_table>;
+
+			gmu_opp_table: opp-table {
+				compatible = "operating-points-v2";
+
+				opp-200000000 {
+					opp-hz = /bits/ 64 <200000000>;
+					opp-level = <RPMH_REGULATOR_LEVEL_MIN_SVS>;
+				};
+			};
+		};
+
 		gpucc: clock-controller@3d90000 {
 			compatible = "qcom,qcs8300-gpucc";
 			reg = <0x0 0x03d90000 0x0 0xa000>;