From patchwork Wed Aug 23 12:55:55 2023
X-Patchwork-Submitter: Konrad Dybcio
X-Patchwork-Id: 716264
From: Konrad Dybcio <konrad.dybcio@linaro.org>
Date: Wed, 23 Aug 2023 14:55:55 +0200
Subject: [PATCH v3 02/10] dt-bindings: display/msm/gmu: Allow passing QMP handle
Message-Id: <20230628-topic-a7xx_drmmsm-v3-2-4ee67ccbaf9d@linaro.org>
References: <20230628-topic-a7xx_drmmsm-v3-0-4ee67ccbaf9d@linaro.org>
In-Reply-To: <20230628-topic-a7xx_drmmsm-v3-0-4ee67ccbaf9d@linaro.org>
To: Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie,
 Daniel Vetter, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
 Bjorn Andersson
Cc: Marijn Suijten, linux-arm-msm@vger.kernel.org,
 dri-devel@lists.freedesktop.org, freedreno@lists.freedesktop.org,
 devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
 Konrad Dybcio, Neil Armstrong, Krzysztof Kozlowski
List-ID: devicetree@vger.kernel.org

When booting the GMU, the QMP mailbox should be pinged about some
tunables (e.g. adaptive clock distribution state). To achieve that, a
reference to it is necessary. Allow it and require it with A730.
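As an illustration of how a driver could consume this phandle (editor's
sketch, not part of the patch: it assumes the AOSS QMP helpers from
include/linux/soc/qcom/qmp.h, and the tunable string is a placeholder):

    #include <linux/err.h>
    #include <linux/soc/qcom/qmp.h>

    /* Sketch only: qmp_get() resolves the "qcom,qmp" property of
     * dev's OF node; the payload below is a made-up example of an
     * ACD (adaptive clock distribution) tunable message.
     */
    static int gmu_qmp_example(struct device *dev)
    {
    	struct qmp *qmp;
    	int ret;

    	qmp = qmp_get(dev);
    	if (IS_ERR(qmp))
    		return PTR_ERR(qmp);

    	ret = qmp_send(qmp, "{class: gpu, res: acd, val: %d}", 1);

    	qmp_put(qmp);
    	return ret;
    }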
Tested-by: Neil Armstrong # on SM8550-QRD
Tested-by: Dmitry Baryshkov # sm8450
Acked-by: Krzysztof Kozlowski
Signed-off-by: Konrad Dybcio
---
 Documentation/devicetree/bindings/display/msm/gmu.yaml | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/Documentation/devicetree/bindings/display/msm/gmu.yaml b/Documentation/devicetree/bindings/display/msm/gmu.yaml
index 20ddb89a4500..e132dbff3c4a 100644
--- a/Documentation/devicetree/bindings/display/msm/gmu.yaml
+++ b/Documentation/devicetree/bindings/display/msm/gmu.yaml
@@ -64,6 +64,10 @@ properties:
   iommus:
     maxItems: 1

+  qcom,qmp:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description: Reference to the AOSS side-channel message RAM
+
   operating-points-v2: true

   opp-table:
@@ -251,6 +255,9 @@ allOf:
               - const: hub
               - const: demet

+      required:
+        - qcom,qmp
+
   - if:
       properties:
         compatible:

From patchwork Wed Aug 23 12:55:56 2023
X-Patchwork-Submitter: Konrad Dybcio
X-Patchwork-Id: 716263
From: Konrad Dybcio <konrad.dybcio@linaro.org>
Date: Wed, 23 Aug 2023 14:55:56 +0200
Subject: [PATCH v3 03/10] dt-bindings: display/msm/gpu: Allow A7xx SKUs
Message-Id: <20230628-topic-a7xx_drmmsm-v3-3-4ee67ccbaf9d@linaro.org>
References: <20230628-topic-a7xx_drmmsm-v3-0-4ee67ccbaf9d@linaro.org>
In-Reply-To: <20230628-topic-a7xx_drmmsm-v3-0-4ee67ccbaf9d@linaro.org>
List-ID: devicetree@vger.kernel.org

Allow A7xx SKUs, such as the A730 GPU found on SM8450 and friends.
They use GMU for all things DVFS, just like most A6xx GPUs.
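For instance, the A730 on SM8450 can then be described with:

    compatible = "qcom,adreno-730.1", "qcom,adreno";

(the patch-level digit here is illustrative; the schema only constrains
the 'qcom,adreno-NNN.N' shape).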
Reviewed-by: Krzysztof Kozlowski
Tested-by: Neil Armstrong # on SM8550-QRD
Tested-by: Dmitry Baryshkov # sm8450
Signed-off-by: Konrad Dybcio
---
 Documentation/devicetree/bindings/display/msm/gpu.yaml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/devicetree/bindings/display/msm/gpu.yaml b/Documentation/devicetree/bindings/display/msm/gpu.yaml
index 56b9b247e8c2..b019db954793 100644
--- a/Documentation/devicetree/bindings/display/msm/gpu.yaml
+++ b/Documentation/devicetree/bindings/display/msm/gpu.yaml
@@ -23,7 +23,7 @@ properties:
           The driver is parsing the compat string for Adreno to
           figure out the gpu-id and patch level.
       items:
-        - pattern: '^qcom,adreno-[3-6][0-9][0-9]\.[0-9]$'
+        - pattern: '^qcom,adreno-[3-7][0-9][0-9]\.[0-9]$'
         - const: qcom,adreno
     - description: |
         The driver is parsing the compat string for Imageon to
@@ -203,7 +203,7 @@ allOf:
       properties:
         compatible:
           contains:
-            pattern: '^qcom,adreno-6[0-9][0-9]\.[0-9]$'
+            pattern: '^qcom,adreno-[67][0-9][0-9]\.[0-9]$'
     then:
       # Starting with A6xx, the clocks are usually defined in the GMU node
       properties:
From patchwork Wed Aug 23 12:55:58 2023
X-Patchwork-Submitter: Konrad Dybcio
X-Patchwork-Id: 716262
From: Konrad Dybcio <konrad.dybcio@linaro.org>
Date: Wed, 23 Aug 2023 14:55:58 +0200
Subject: [PATCH v3 05/10] drm/msm/a6xx: Add skeleton A7xx support
Message-Id: <20230628-topic-a7xx_drmmsm-v3-5-4ee67ccbaf9d@linaro.org>
References: <20230628-topic-a7xx_drmmsm-v3-0-4ee67ccbaf9d@linaro.org>
In-Reply-To: <20230628-topic-a7xx_drmmsm-v3-0-4ee67ccbaf9d@linaro.org>
List-ID: devicetree@vger.kernel.org

A7xx GPUs are - from the kernel's POV anyway - basically another
generation of A6xx. They build upon the A650/A660_family advancements,
skipping some writes (presumably more values are preset correctly on
reset), adding some new ones and changing others.

One notable difference is the introduction of a second shadow, called
BV. To handle this with the current code, allocate it right after the
current RPTR shadow.

BV handling and .submit are mostly based on Jonathan Marek's work.

All A7xx GPUs are assumed to have a GMU. A702 is not an A7xx-class
GPU, it's a weird forked A610.
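For readers new to the memptrs scheme: the BV slot introduced below is
just another member of struct msm_rbmemptrs and is addressed through
the existing helper, roughly (editor's sketch of the macro in
msm_ringbuffer.h):

    /* GPU-visible IOVA of a memptrs member; the new BV slot is then
     * reached the same way as rptr/fence, e.g. rbmemptr(ring, bv_fence).
     */
    #define rbmemptr(ring, member) \
    	((ring)->memptrs_iova + offsetof(struct msm_rbmemptrs, member))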
Tested-by: Neil Armstrong # on SM8550-QRD Tested-by: Dmitry Baryshkov # sm8450 Signed-off-by: Konrad Dybcio --- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 95 +++++-- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 451 ++++++++++++++++++++++++++++---- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 1 + drivers/gpu/drm/msm/adreno/adreno_gpu.h | 10 +- drivers/gpu/drm/msm/msm_ringbuffer.h | 2 + 5 files changed, 478 insertions(+), 81 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index 03fa89bf3e4b..75984260898e 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -200,9 +200,10 @@ int a6xx_gmu_wait_for_idle(struct a6xx_gmu *gmu) static int a6xx_gmu_start(struct a6xx_gmu *gmu) { + struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu); + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; + u32 mask, reset_val, val; int ret; - u32 val; - u32 mask, reset_val; val = gmu_read(gmu, REG_A6XX_GMU_CM3_DTCM_START + 0xff8); if (val <= 0x20010004) { @@ -218,7 +219,11 @@ static int a6xx_gmu_start(struct a6xx_gmu *gmu) /* Set the log wptr index * note: downstream saves the value in poweroff and restores it here */ - gmu_write(gmu, REG_A6XX_GPU_GMU_CX_GMU_PWR_COL_CP_RESP, 0); + if (adreno_is_a7xx(adreno_gpu)) + gmu_write(gmu, REG_A6XX_GMU_GENERAL_9, 0); + else + gmu_write(gmu, REG_A6XX_GPU_GMU_CX_GMU_PWR_COL_CP_RESP, 0); + gmu_write(gmu, REG_A6XX_GMU_CM3_SYSRESET, 0); @@ -518,7 +523,9 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu) if (IS_ERR(pdcptr)) goto err; - if (adreno_is_a650(adreno_gpu) || adreno_is_a660_family(adreno_gpu)) + if (adreno_is_a650(adreno_gpu) || + adreno_is_a660_family(adreno_gpu) || + adreno_is_a7xx(adreno_gpu)) pdc_in_aop = true; else if (adreno_is_a618(adreno_gpu) || adreno_is_a640_family(adreno_gpu)) pdc_address_offset = 0x30090; @@ -550,7 +557,8 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu) gmu_write_rscc(gmu, REG_A6XX_RSCC_PDC_MATCH_VALUE_HI, 0x4514); /* Load RSC sequencer uCode for sleep and wakeup */ - if (adreno_is_a650_family(adreno_gpu)) { + if (adreno_is_a650_family(adreno_gpu) || + adreno_is_a7xx(adreno_gpu)) { gmu_write_rscc(gmu, REG_A6XX_RSCC_SEQ_MEM_0_DRV0, 0xeaaae5a0); gmu_write_rscc(gmu, REG_A6XX_RSCC_SEQ_MEM_0_DRV0 + 1, 0xe1a1ebab); gmu_write_rscc(gmu, REG_A6XX_RSCC_SEQ_MEM_0_DRV0 + 2, 0xa2e0a581); @@ -635,11 +643,18 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu) /* Set up the idle state for the GMU */ static void a6xx_gmu_power_config(struct a6xx_gmu *gmu) { + struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu); + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; + /* Disable GMU WB/RB buffer */ gmu_write(gmu, REG_A6XX_GMU_SYS_BUS_CONFIG, 0x1); gmu_write(gmu, REG_A6XX_GMU_ICACHE_CONFIG, 0x1); gmu_write(gmu, REG_A6XX_GMU_DCACHE_CONFIG, 0x1); + /* A7xx knows better by default! 
*/ + if (adreno_is_a7xx(adreno_gpu)) + return; + gmu_write(gmu, REG_A6XX_GMU_PWR_COL_INTER_FRAME_CTRL, 0x9c40400); switch (gmu->idle_level) { @@ -702,7 +717,7 @@ static int a6xx_gmu_fw_load(struct a6xx_gmu *gmu) u32 itcm_base = 0x00000000; u32 dtcm_base = 0x00040000; - if (adreno_is_a650_family(adreno_gpu)) + if (adreno_is_a650_family(adreno_gpu) || adreno_is_a7xx(adreno_gpu)) dtcm_base = 0x10004000; if (gmu->legacy) { @@ -751,14 +766,22 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state) { struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu); struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; + u32 fence_range_lower, fence_range_upper; int ret; u32 chipid; - if (adreno_is_a650_family(adreno_gpu)) { + /* Vote veto for FAL10 */ + if (adreno_is_a650_family(adreno_gpu) || adreno_is_a7xx(adreno_gpu)) { gmu_write(gmu, REG_A6XX_GPU_GMU_CX_GMU_CX_FALNEXT_INTF, 1); gmu_write(gmu, REG_A6XX_GPU_GMU_CX_GMU_CX_FAL_INTF, 1); } + /* Turn on TCM (Tightly Coupled Memory) retention */ + if (adreno_is_a7xx(adreno_gpu)) + a6xx_llc_write(a6xx_gpu, REG_A7XX_CX_MISC_TCM_RET_CNTL, 1); + else + gmu_write(gmu, REG_A6XX_GMU_GENERAL_7, 1); + if (state == GMU_WARM_BOOT) { ret = a6xx_rpmh_start(gmu); if (ret) @@ -768,9 +791,6 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state) "GMU firmware is not loaded\n")) return -ENOENT; - /* Turn on register retention */ - gmu_write(gmu, REG_A6XX_GMU_GENERAL_7, 1); - ret = a6xx_rpmh_start(gmu); if (ret) return ret; @@ -780,6 +800,7 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state) return ret; } + /* Clear init result to make sure we are getting a fresh value */ gmu_write(gmu, REG_A6XX_GMU_CM3_FW_INIT_RESULT, 0); gmu_write(gmu, REG_A6XX_GMU_CM3_BOOT_CONFIG, 0x02); @@ -787,8 +808,18 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state) gmu_write(gmu, REG_A6XX_GMU_HFI_QTBL_ADDR, gmu->hfi.iova); gmu_write(gmu, REG_A6XX_GMU_HFI_QTBL_INFO, 1); + if (adreno_is_a7xx(adreno_gpu)) { + fence_range_upper = 0x32; + fence_range_lower = 0x8a0; + } else { + fence_range_upper = 0xa; + fence_range_lower = 0xa0; + } + gmu_write(gmu, REG_A6XX_GMU_AHB_FENCE_RANGE_0, - (1 << 31) | (0xa << 18) | (0xa0)); + BIT(31) | + FIELD_PREP(GENMASK(30, 18), fence_range_upper) | + FIELD_PREP(GENMASK(17, 0), fence_range_lower)); /* * Snapshots toggle the NMI bit which will result in a jump to the NMI @@ -807,10 +838,17 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state) chipid |= (adreno_gpu->chip_id << 4) & 0xf000; /* minor */ chipid |= (adreno_gpu->chip_id << 8) & 0x0f00; /* patchid */ - gmu_write(gmu, REG_A6XX_GMU_HFI_SFR_ADDR, chipid); + if (adreno_is_a7xx(adreno_gpu)) { + gmu_write(gmu, REG_A6XX_GMU_GENERAL_10, chipid); + gmu_write(gmu, REG_A6XX_GMU_GENERAL_8, + (gmu->log.iova & GENMASK(31, 12)) | + ((gmu->log.size / SZ_4K - 1) & GENMASK(7, 0))); + } else { + gmu_write(gmu, REG_A6XX_GMU_HFI_SFR_ADDR, chipid); - gmu_write(gmu, REG_A6XX_GPU_GMU_CX_GMU_PWR_COL_CP_MSG, - gmu->log.iova | (gmu->log.size / SZ_4K - 1)); + gmu_write(gmu, REG_A6XX_GPU_GMU_CX_GMU_PWR_COL_CP_MSG, + gmu->log.iova | (gmu->log.size / SZ_4K - 1)); + } /* Set up the lowest idle level on the GMU */ a6xx_gmu_power_config(gmu); @@ -984,15 +1022,19 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu) enable_irq(gmu->gmu_irq); /* Check to see if we are doing a cold or warm boot */ - status = gmu_read(gmu, REG_A6XX_GMU_GENERAL_7) == 1 ? 
- GMU_WARM_BOOT : GMU_COLD_BOOT; - - /* - * Warm boot path does not work on newer GPUs - * Presumably this is because icache/dcache regions must be restored - */ - if (!gmu->legacy) + if (adreno_is_a7xx(adreno_gpu)) { + status = a6xx_llc_read(a6xx_gpu, REG_A7XX_CX_MISC_TCM_RET_CNTL) == 1 ? + GMU_WARM_BOOT : GMU_COLD_BOOT; + } else if (gmu->legacy) { + status = gmu_read(gmu, REG_A6XX_GMU_GENERAL_7) == 1 ? + GMU_WARM_BOOT : GMU_COLD_BOOT; + } else { + /* + * Warm boot path does not work on newer A6xx GPUs + * Presumably this is because icache/dcache regions must be restored + */ status = GMU_COLD_BOOT; + } ret = a6xx_gmu_fw_start(gmu, status); if (ret) @@ -1604,7 +1646,8 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node) * are otherwise unused by a660. */ gmu->dummy.size = SZ_4K; - if (adreno_is_a660_family(adreno_gpu)) { + if (adreno_is_a660_family(adreno_gpu) || + adreno_is_a7xx(adreno_gpu)) { ret = a6xx_gmu_memory_alloc(gmu, &gmu->debug, SZ_4K * 7, 0x60400000, "debug"); if (ret) @@ -1620,7 +1663,8 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node) goto err_memory; /* Note that a650 family also includes a660 family: */ - if (adreno_is_a650_family(adreno_gpu)) { + if (adreno_is_a650_family(adreno_gpu) || + adreno_is_a7xx(adreno_gpu)) { ret = a6xx_gmu_memory_alloc(gmu, &gmu->icache, SZ_16M - SZ_16K, 0x04000, "icache"); if (ret) @@ -1668,7 +1712,8 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node) goto err_memory; } - if (adreno_is_a650_family(adreno_gpu)) { + if (adreno_is_a650_family(adreno_gpu) || + adreno_is_a7xx(adreno_gpu)) { gmu->rscc = a6xx_gmu_get_mmio(pdev, "rscc"); if (IS_ERR(gmu->rscc)) { ret = -ENODEV; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index d4e85e24002f..61ce8d053355 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -103,6 +103,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, struct msm_ringbuffer *ring, struct msm_file_private *ctx) { bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1; + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; phys_addr_t ttbr; u32 asid; u64 memptr = rbmemptr(ring, ttbr0); @@ -114,9 +115,11 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, return; if (!sysprof) { - /* Turn off protected mode to write to special registers */ - OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1); - OUT_RING(ring, 0); + if (!adreno_is_a7xx(adreno_gpu)) { + /* Turn off protected mode to write to special registers */ + OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1); + OUT_RING(ring, 0); + } OUT_PKT4(ring, REG_A6XX_RBBM_PERFCTR_SRAM_INIT_CMD, 1); OUT_RING(ring, 1); @@ -141,6 +144,16 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, OUT_RING(ring, lower_32_bits(ttbr)); OUT_RING(ring, (asid << 16) | upper_32_bits(ttbr)); + /* + * Sync both threads after switching pagetables and enable BR only + * to make sure BV doesn't race ahead while BR is still switching + * pagetables. 
+ */ + if (adreno_is_a7xx(&a6xx_gpu->base)) { + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, CP_THREAD_CONTROL_0_SYNC_THREADS | CP_SET_THREAD_BR); + } + /* * And finally, trigger a uche flush to be sure there isn't anything * lingering in that part of the GPU @@ -163,9 +176,11 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, OUT_RING(ring, CP_WAIT_REG_MEM_4_MASK(0x1)); OUT_RING(ring, CP_WAIT_REG_MEM_5_DELAY_LOOP_CYCLES(0)); - /* Re-enable protected mode: */ - OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1); - OUT_RING(ring, 1); + if (!adreno_is_a7xx(adreno_gpu)) { + /* Re-enable protected mode: */ + OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1); + OUT_RING(ring, 1); + } } } @@ -252,6 +267,133 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) a6xx_flush(gpu, ring); } +static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) +{ + unsigned int index = submit->seqno % MSM_GPU_SUBMIT_STATS_COUNT; + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); + struct msm_ringbuffer *ring = submit->ring; + unsigned int i, ibs = 0; + + /* + * Toggle concurrent binning for pagetable switch and set the thread to + * BR since only it can execute the pagetable switch packets. + */ + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, CP_THREAD_CONTROL_0_SYNC_THREADS | CP_SET_THREAD_BR); + + a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx); + + get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP(0), + rbmemptr_stats(ring, index, cpcycles_start)); + get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER, + rbmemptr_stats(ring, index, alwayson_start)); + + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, CP_SET_THREAD_BOTH); + + OUT_PKT7(ring, CP_SET_MARKER, 1); + OUT_RING(ring, 0x101); /* IFPC disable */ + + OUT_PKT7(ring, CP_SET_MARKER, 1); + OUT_RING(ring, 0x00d); /* IB1LIST start */ + + /* Submit the commands */ + for (i = 0; i < submit->nr_cmds; i++) { + switch (submit->cmd[i].type) { + case MSM_SUBMIT_CMD_IB_TARGET_BUF: + break; + case MSM_SUBMIT_CMD_CTX_RESTORE_BUF: + if (gpu->cur_ctx_seqno == submit->queue->ctx->seqno) + break; + fallthrough; + case MSM_SUBMIT_CMD_BUF: + OUT_PKT7(ring, CP_INDIRECT_BUFFER_PFE, 3); + OUT_RING(ring, lower_32_bits(submit->cmd[i].iova)); + OUT_RING(ring, upper_32_bits(submit->cmd[i].iova)); + OUT_RING(ring, submit->cmd[i].size); + ibs++; + break; + } + + /* + * Periodically update shadow-wptr if needed, so that we + * can see partial progress of submits with large # of + * cmds.. 
otherwise we could needlessly stall waiting for + * ringbuffer state, simply due to looking at a shadow + * rptr value that has not been updated + */ + if ((ibs % 32) == 0) + update_shadow_rptr(gpu, ring); + } + + OUT_PKT7(ring, CP_SET_MARKER, 1); + OUT_RING(ring, 0x00e); /* IB1LIST end */ + + get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP(0), + rbmemptr_stats(ring, index, cpcycles_end)); + get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER, + rbmemptr_stats(ring, index, alwayson_end)); + + /* Write the fence to the scratch register */ + OUT_PKT4(ring, REG_A6XX_CP_SCRATCH_REG(2), 1); + OUT_RING(ring, submit->seqno); + + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, CP_SET_THREAD_BR); + + OUT_PKT7(ring, CP_EVENT_WRITE, 1); + OUT_RING(ring, CCU_INVALIDATE_DEPTH); + + OUT_PKT7(ring, CP_EVENT_WRITE, 1); + OUT_RING(ring, CCU_INVALIDATE_COLOR); + + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, CP_SET_THREAD_BV); + + /* + * Make sure the timestamp is committed once BV pipe is + * completely done with this submission. + */ + OUT_PKT7(ring, CP_EVENT_WRITE, 4); + OUT_RING(ring, CACHE_CLEAN | BIT(27)); + OUT_RING(ring, lower_32_bits(rbmemptr(ring, bv_fence))); + OUT_RING(ring, upper_32_bits(rbmemptr(ring, bv_fence))); + OUT_RING(ring, submit->seqno); + + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, CP_SET_THREAD_BR); + + /* + * This makes sure that BR doesn't race ahead and commit + * timestamp to memstore while BV is still processing + * this submission. + */ + OUT_PKT7(ring, CP_WAIT_TIMESTAMP, 4); + OUT_RING(ring, 0); + OUT_RING(ring, lower_32_bits(rbmemptr(ring, bv_fence))); + OUT_RING(ring, upper_32_bits(rbmemptr(ring, bv_fence))); + OUT_RING(ring, submit->seqno); + + /* write the ringbuffer timestamp */ + OUT_PKT7(ring, CP_EVENT_WRITE, 4); + OUT_RING(ring, CACHE_CLEAN | CP_EVENT_WRITE_0_IRQ | BIT(27)); + OUT_RING(ring, lower_32_bits(rbmemptr(ring, fence))); + OUT_RING(ring, upper_32_bits(rbmemptr(ring, fence))); + OUT_RING(ring, submit->seqno); + + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, CP_SET_THREAD_BOTH); + + OUT_PKT7(ring, CP_SET_MARKER, 1); + OUT_RING(ring, 0x100); /* IFPC enable */ + + trace_msm_gpu_submit_flush(submit, + gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER)); + + a6xx_flush(gpu, ring); +} + const struct adreno_reglist a612_hwcg[] = { {REG_A6XX_RBBM_CLOCK_CNTL_SP0, 0x22222222}, {REG_A6XX_RBBM_CLOCK_CNTL2_SP0, 0x02222220}, @@ -714,6 +856,15 @@ static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state) else clock_cntl_on = 0x8aa8aa82; + if (adreno_is_a7xx(adreno_gpu)) { + gmu_write(&a6xx_gpu->gmu, REG_A6XX_GPU_GMU_AO_GMU_CGC_MODE_CNTL, + state ? 0x20000 : 0); + gmu_write(&a6xx_gpu->gmu, REG_A6XX_GPU_GMU_AO_GMU_CGC_DELAY_CNTL, + state ? 0x10111 : 0); + gmu_write(&a6xx_gpu->gmu, REG_A6XX_GPU_GMU_AO_GMU_CGC_HYST_CNTL, + state ? 0x5555 : 0); + } + val = gpu_read(gpu, REG_A6XX_RBBM_CLOCK_CNTL); /* Don't re-program the registers if they are already correct */ @@ -721,14 +872,14 @@ static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state) return; /* Disable SP clock before programming HWCG registers */ - if (!adreno_is_a610(adreno_gpu)) + if (!adreno_is_a610(adreno_gpu) && !adreno_is_a7xx(adreno_gpu)) gmu_rmw(gmu, REG_A6XX_GPU_GMU_GX_SPTPRAC_CLOCK_CONTROL, 1, 0); for (i = 0; (reg = &adreno_gpu->info->hwcg[i], reg->offset); i++) gpu_write(gpu, reg->offset, state ? 
reg->value : 0); /* Enable SP clock */ - if (!adreno_is_a610(adreno_gpu)) + if (!adreno_is_a610(adreno_gpu) && !adreno_is_a7xx(adreno_gpu)) gmu_rmw(gmu, REG_A6XX_GPU_GMU_GX_SPTPRAC_CLOCK_CONTROL, 0, 1); gpu_write(gpu, REG_A6XX_RBBM_CLOCK_CNTL, state ? clock_cntl_on : 0); @@ -1017,6 +1168,10 @@ static void a6xx_set_ubwc_config(struct msm_gpu *gpu) uavflagprd_inv << 4 | min_acc_len << 3 | hbb_lo << 1 | ubwc_mode); + if (adreno_is_a7xx(adreno_gpu)) + gpu_write(gpu, REG_A7XX_GRAS_NC_MODE_CNTL, + FIELD_PREP(GENMASK(8, 5), hbb_lo)); + gpu_write(gpu, REG_A6XX_UCHE_MODE_CNTL, min_acc_len << 23 | hbb_lo << 21); } @@ -1049,6 +1204,55 @@ static int a6xx_cp_init(struct msm_gpu *gpu) return a6xx_idle(gpu, ring) ? 0 : -EINVAL; } +static int a7xx_cp_init(struct msm_gpu *gpu) +{ + struct msm_ringbuffer *ring = gpu->rb[0]; + u32 mask; + + /* Disable concurrent binning before sending CP init */ + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); + OUT_RING(ring, BIT(27)); + + OUT_PKT7(ring, CP_ME_INIT, 7); + + /* Use multiple HW contexts */ + mask = BIT(0); + + /* Enable error detection */ + mask |= BIT(1); + + /* Set default reset state */ + mask |= BIT(3); + + /* Disable save/restore of performance counters across preemption */ + mask |= BIT(6); + + /* Enable the register init list with the spinlock */ + mask |= BIT(8); + + OUT_RING(ring, mask); + + /* Enable multiple hardware contexts */ + OUT_RING(ring, 0x00000003); + + /* Enable error detection */ + OUT_RING(ring, 0x20000000); + + /* Operation mode mask */ + OUT_RING(ring, 0x00000002); + + /* *Don't* send a power up reg list for concurrent binning (TODO) */ + /* Lo address */ + OUT_RING(ring, 0x00000000); + /* Hi address */ + OUT_RING(ring, 0x00000000); + /* BIT(31) set => read the regs from the list */ + OUT_RING(ring, 0x00000000); + + a6xx_flush(gpu, ring); + return a6xx_idle(gpu, ring) ? 0 : -EINVAL; +} + /* * Check that the microcode version is new enough to include several key * security fixes. Return true if the ucode is safe. @@ -1065,6 +1269,10 @@ static bool a6xx_ucode_check_version(struct a6xx_gpu *a6xx_gpu, if (IS_ERR(buf)) return false; + /* A7xx is safe! 
*/ + if (adreno_is_a7xx(adreno_gpu)) + return true; + /* * Targets up to a640 (a618, a630 and a640) need to check for a * microcode version that is patched to support the whereami opcode or @@ -1181,16 +1389,39 @@ static int a6xx_zap_shader_init(struct msm_gpu *gpu) } #define A6XX_INT_MASK (A6XX_RBBM_INT_0_MASK_CP_AHB_ERROR | \ - A6XX_RBBM_INT_0_MASK_RBBM_ATB_ASYNCFIFO_OVERFLOW | \ - A6XX_RBBM_INT_0_MASK_CP_HW_ERROR | \ - A6XX_RBBM_INT_0_MASK_CP_IB2 | \ - A6XX_RBBM_INT_0_MASK_CP_IB1 | \ - A6XX_RBBM_INT_0_MASK_CP_RB | \ - A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS | \ - A6XX_RBBM_INT_0_MASK_RBBM_ATB_BUS_OVERFLOW | \ - A6XX_RBBM_INT_0_MASK_RBBM_HANG_DETECT | \ - A6XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \ - A6XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR) + A6XX_RBBM_INT_0_MASK_RBBM_ATB_ASYNCFIFO_OVERFLOW | \ + A6XX_RBBM_INT_0_MASK_CP_HW_ERROR | \ + A6XX_RBBM_INT_0_MASK_CP_IB2 | \ + A6XX_RBBM_INT_0_MASK_CP_IB1 | \ + A6XX_RBBM_INT_0_MASK_CP_RB | \ + A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS | \ + A6XX_RBBM_INT_0_MASK_RBBM_ATB_BUS_OVERFLOW | \ + A6XX_RBBM_INT_0_MASK_RBBM_HANG_DETECT | \ + A6XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \ + A6XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR) + +#define A7XX_INT_MASK (A6XX_RBBM_INT_0_MASK_CP_AHB_ERROR | \ + A6XX_RBBM_INT_0_MASK_RBBM_ATB_ASYNCFIFO_OVERFLOW | \ + A6XX_RBBM_INT_0_MASK_RBBM_GPC_ERROR | \ + A6XX_RBBM_INT_0_MASK_CP_SW | \ + A6XX_RBBM_INT_0_MASK_CP_HW_ERROR | \ + A6XX_RBBM_INT_0_MASK_PM4CPINTERRUPT | \ + A6XX_RBBM_INT_0_MASK_CP_RB_DONE_TS | \ + A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS | \ + A6XX_RBBM_INT_0_MASK_RBBM_ATB_BUS_OVERFLOW | \ + A6XX_RBBM_INT_0_MASK_RBBM_HANG_DETECT | \ + A6XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \ + A6XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR | \ + A6XX_RBBM_INT_0_MASK_TSBWRITEERROR) + +#define A7XX_APRIV_MASK (A6XX_CP_APRIV_CNTL_ICACHE | \ + A6XX_CP_APRIV_CNTL_RBFETCH | \ + A6XX_CP_APRIV_CNTL_RBPRIVLEVEL | \ + A6XX_CP_APRIV_CNTL_RBRPWB) + +#define A7XX_BR_APRIVMASK (A7XX_APRIV_MASK | \ + A6XX_CP_APRIV_CNTL_CDREAD | \ + A6XX_CP_APRIV_CNTL_CDWRITE) static int hw_init(struct msm_gpu *gpu) { @@ -1232,19 +1463,21 @@ static int hw_init(struct msm_gpu *gpu) gpu_write64(gpu, REG_A6XX_RBBM_SECVID_TSB_TRUSTED_BASE, 0x00000000); gpu_write(gpu, REG_A6XX_RBBM_SECVID_TSB_TRUSTED_SIZE, 0x00000000); - /* Turn on 64 bit addressing for all blocks */ - gpu_write(gpu, REG_A6XX_CP_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_VSC_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_GRAS_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_RB_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_PC_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_HLSQ_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_VFD_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_VPC_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_UCHE_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_SP_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_TPL1_ADDR_MODE_CNTL, 0x1); - gpu_write(gpu, REG_A6XX_RBBM_SECVID_TSB_ADDR_MODE_CNTL, 0x1); + if (!adreno_is_a7xx(adreno_gpu)) { + /* Turn on 64 bit addressing for all blocks */ + gpu_write(gpu, REG_A6XX_CP_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_VSC_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_GRAS_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_RB_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_PC_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_HLSQ_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_VFD_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_VPC_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_UCHE_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_SP_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, 
REG_A6XX_TPL1_ADDR_MODE_CNTL, 0x1); + gpu_write(gpu, REG_A6XX_RBBM_SECVID_TSB_ADDR_MODE_CNTL, 0x1); + } /* enable hardware clockgating */ a6xx_set_hwcg(gpu, true); @@ -1252,12 +1485,14 @@ static int hw_init(struct msm_gpu *gpu) /* VBIF/GBIF start*/ if (adreno_is_a610(adreno_gpu) || adreno_is_a640_family(adreno_gpu) || - adreno_is_a650_family(adreno_gpu)) { + adreno_is_a650_family(adreno_gpu) || + adreno_is_a7xx(adreno_gpu)) { gpu_write(gpu, REG_A6XX_GBIF_QSB_SIDE0, 0x00071620); gpu_write(gpu, REG_A6XX_GBIF_QSB_SIDE1, 0x00071620); gpu_write(gpu, REG_A6XX_GBIF_QSB_SIDE2, 0x00071620); gpu_write(gpu, REG_A6XX_GBIF_QSB_SIDE3, 0x00071620); - gpu_write(gpu, REG_A6XX_RBBM_GBIF_CLIENT_QOS_CNTL, 0x3); + gpu_write(gpu, REG_A6XX_RBBM_GBIF_CLIENT_QOS_CNTL, + adreno_is_a7xx(adreno_gpu) ? 0x2120212 : 0x3); } else { gpu_write(gpu, REG_A6XX_RBBM_VBIF_CLIENT_QOS_CNTL, 0x3); } @@ -1265,13 +1500,21 @@ static int hw_init(struct msm_gpu *gpu) if (adreno_is_a630(adreno_gpu)) gpu_write(gpu, REG_A6XX_VBIF_GATE_OFF_WRREQ_EN, 0x00000009); + if (adreno_is_a7xx(adreno_gpu)) + gpu_write(gpu, REG_A6XX_UCHE_GBIF_GX_CONFIG, 0x10240e0); + /* Make all blocks contribute to the GPU BUSY perf counter */ gpu_write(gpu, REG_A6XX_RBBM_PERFCTR_GPU_BUSY_MASKED, 0xffffffff); /* Disable L2 bypass in the UCHE */ - gpu_write64(gpu, REG_A6XX_UCHE_WRITE_RANGE_MAX, 0x0001ffffffffffc0llu); - gpu_write64(gpu, REG_A6XX_UCHE_TRAP_BASE, 0x0001fffffffff000llu); - gpu_write64(gpu, REG_A6XX_UCHE_WRITE_THRU_BASE, 0x0001fffffffff000llu); + if (adreno_is_a7xx(adreno_gpu)) { + gpu_write64(gpu, REG_A6XX_UCHE_TRAP_BASE, 0x0001fffffffff000llu); + gpu_write64(gpu, REG_A6XX_UCHE_WRITE_THRU_BASE, 0x0001fffffffff000llu); + } else { + gpu_write64(gpu, REG_A6XX_UCHE_WRITE_RANGE_MAX, 0x0001ffffffffffc0llu); + gpu_write64(gpu, REG_A6XX_UCHE_TRAP_BASE, 0x0001fffffffff000llu); + gpu_write64(gpu, REG_A6XX_UCHE_WRITE_THRU_BASE, 0x0001fffffffff000llu); + } if (!adreno_is_a650_family(adreno_gpu)) { /* Set the GMEM VA range [0x100000:0x100000 + gpu->gmem - 1] */ @@ -1281,8 +1524,12 @@ static int hw_init(struct msm_gpu *gpu) 0x00100000 + adreno_gpu->info->gmem - 1); } - gpu_write(gpu, REG_A6XX_UCHE_FILTER_CNTL, 0x804); - gpu_write(gpu, REG_A6XX_UCHE_CACHE_WAYS, 0x4); + if (adreno_is_a7xx(adreno_gpu)) + gpu_write(gpu, REG_A6XX_UCHE_CACHE_WAYS, BIT(23)); + else { + gpu_write(gpu, REG_A6XX_UCHE_FILTER_CNTL, 0x804); + gpu_write(gpu, REG_A6XX_UCHE_CACHE_WAYS, 0x4); + } if (adreno_is_a640_family(adreno_gpu) || adreno_is_a650_family(adreno_gpu)) { gpu_write(gpu, REG_A6XX_CP_ROQ_THRESHOLDS_2, 0x02000140); @@ -1290,7 +1537,7 @@ static int hw_init(struct msm_gpu *gpu) } else if (adreno_is_a610(adreno_gpu)) { gpu_write(gpu, REG_A6XX_CP_ROQ_THRESHOLDS_2, 0x00800060); gpu_write(gpu, REG_A6XX_CP_ROQ_THRESHOLDS_1, 0x40201b16); - } else { + } else if (!adreno_is_a7xx(adreno_gpu)) { gpu_write(gpu, REG_A6XX_CP_ROQ_THRESHOLDS_2, 0x010000c0); gpu_write(gpu, REG_A6XX_CP_ROQ_THRESHOLDS_1, 0x8040362c); } @@ -1302,7 +1549,7 @@ static int hw_init(struct msm_gpu *gpu) if (adreno_is_a610(adreno_gpu)) { gpu_write(gpu, REG_A6XX_CP_MEM_POOL_SIZE, 48); gpu_write(gpu, REG_A6XX_CP_MEM_POOL_DBG_ADDR, 47); - } else + } else if (!adreno_is_a7xx(adreno_gpu)) gpu_write(gpu, REG_A6XX_CP_MEM_POOL_SIZE, 128); /* Setting the primFifo thresholds default values, @@ -1318,7 +1565,7 @@ static int hw_init(struct msm_gpu *gpu) gpu_write(gpu, REG_A6XX_PC_DBG_ECO_CNTL, 0x00018000); else if (adreno_is_a610(adreno_gpu)) gpu_write(gpu, REG_A6XX_PC_DBG_ECO_CNTL, 0x00080000); - else + else if (!adreno_is_a7xx(adreno_gpu)) 
gpu_write(gpu, REG_A6XX_PC_DBG_ECO_CNTL, 0x00180000); /* Set the AHB default slave response to "ERROR" */ @@ -1327,6 +1574,12 @@ static int hw_init(struct msm_gpu *gpu) /* Turn on performance counters */ gpu_write(gpu, REG_A6XX_RBBM_PERFCTR_CNTL, 0x1); + if (adreno_is_a7xx(adreno_gpu)) { + /* Turn on the IFPC counter (countable 4 on XOCLK4) */ + gmu_write(&a6xx_gpu->gmu, REG_A6XX_GMU_CX_GMU_POWER_COUNTER_SELECT_1, + FIELD_PREP(GENMASK(7, 0), 0x4)); + } + /* Select CP0 to always count cycles */ gpu_write(gpu, REG_A6XX_CP_PERFCTR_CP_SEL(0), PERF_CP_ALWAYS_COUNT); @@ -1373,15 +1626,31 @@ static int hw_init(struct msm_gpu *gpu) /* Set dualQ + disable afull for A660 GPU */ if (adreno_is_a660(adreno_gpu)) gpu_write(gpu, REG_A6XX_UCHE_CMDQ_CONFIG, 0x66906); + else if (adreno_is_a7xx(adreno_gpu)) + gpu_write(gpu, REG_A6XX_UCHE_CMDQ_CONFIG, + FIELD_PREP(GENMASK(19, 16), 6) | + FIELD_PREP(GENMASK(15, 12), 6) | + FIELD_PREP(GENMASK(11, 8), 9) | + BIT(3) | BIT(2) | + FIELD_PREP(GENMASK(1, 0), 2)); /* Enable expanded apriv for targets that support it */ if (gpu->hw_apriv) { - gpu_write(gpu, REG_A6XX_CP_APRIV_CNTL, - (1 << 6) | (1 << 5) | (1 << 3) | (1 << 2) | (1 << 1)); + if (adreno_is_a7xx(adreno_gpu)) { + gpu_write(gpu, REG_A6XX_CP_APRIV_CNTL, + A7XX_BR_APRIVMASK); + gpu_write(gpu, REG_A7XX_CP_BV_APRIV_CNTL, + A7XX_APRIV_MASK); + gpu_write(gpu, REG_A7XX_CP_LPAC_APRIV_CNTL, + A7XX_APRIV_MASK); + } else + gpu_write(gpu, REG_A6XX_CP_APRIV_CNTL, + BIT(6) | BIT(5) | BIT(3) | BIT(2) | BIT(1)); } /* Enable interrupts */ - gpu_write(gpu, REG_A6XX_RBBM_INT_0_MASK, A6XX_INT_MASK); + gpu_write(gpu, REG_A6XX_RBBM_INT_0_MASK, + adreno_is_a7xx(adreno_gpu) ? A7XX_INT_MASK : A6XX_INT_MASK); ret = adreno_hw_init(gpu); if (ret) @@ -1408,6 +1677,12 @@ static int hw_init(struct msm_gpu *gpu) shadowptr(a6xx_gpu, gpu->rb[0])); } + /* ..which means "always" on A7xx, also for BV shadow */ + if (adreno_is_a7xx(adreno_gpu)) { + gpu_write64(gpu, REG_A7XX_CP_BV_RB_RPTR_ADDR, + rbmemptr(gpu->rb[0], bv_fence)); + } + /* Always come up on rb 0 */ a6xx_gpu->cur_ring = gpu->rb[0]; @@ -1416,7 +1691,7 @@ static int hw_init(struct msm_gpu *gpu) /* Enable the SQE_to start the CP engine */ gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1); - ret = a6xx_cp_init(gpu); + ret = adreno_is_a7xx(adreno_gpu) ? 
a7xx_cp_init(gpu) : a6xx_cp_init(gpu); if (ret) goto out; @@ -1653,7 +1928,7 @@ static void a6xx_cp_hw_err_irq(struct msm_gpu *gpu) (val & 0x3ffff), val); } - if (status & A6XX_CP_INT_CP_AHB_ERROR) + if (status & A6XX_CP_INT_CP_AHB_ERROR && !adreno_is_a7xx(to_adreno_gpu(gpu))) dev_err_ratelimited(&gpu->pdev->dev, "CP AHB error interrupt\n"); if (status & A6XX_CP_INT_CP_VSD_PARITY_ERROR) @@ -1803,6 +2078,35 @@ static void a6xx_llc_activate(struct a6xx_gpu *a6xx_gpu) gpu_rmw(gpu, REG_A6XX_GBIF_SCACHE_CNTL1, GENMASK(24, 0), cntl1_regval); } +static void a7xx_llc_activate(struct a6xx_gpu *a6xx_gpu) +{ + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; + struct msm_gpu *gpu = &adreno_gpu->base; + + if (IS_ERR(a6xx_gpu->llc_mmio)) + return; + + if (!llcc_slice_activate(a6xx_gpu->llc_slice)) { + u32 gpu_scid = llcc_get_slice_id(a6xx_gpu->llc_slice); + + gpu_scid &= GENMASK(4, 0); + + gpu_write(gpu, REG_A6XX_GBIF_SCACHE_CNTL1, + FIELD_PREP(GENMASK(29, 25), gpu_scid) | + FIELD_PREP(GENMASK(24, 20), gpu_scid) | + FIELD_PREP(GENMASK(19, 15), gpu_scid) | + FIELD_PREP(GENMASK(14, 10), gpu_scid) | + FIELD_PREP(GENMASK(9, 5), gpu_scid) | + FIELD_PREP(GENMASK(4, 0), gpu_scid)); + + gpu_write(gpu, REG_A6XX_GBIF_SCACHE_CNTL0, + FIELD_PREP(GENMASK(14, 10), gpu_scid) | + BIT(8)); + } + + llcc_slice_activate(a6xx_gpu->htw_llc_slice); +} + static void a6xx_llc_slices_destroy(struct a6xx_gpu *a6xx_gpu) { /* No LLCC on non-RPMh (and by extension, non-GMU) SoCs */ @@ -1814,7 +2118,7 @@ static void a6xx_llc_slices_destroy(struct a6xx_gpu *a6xx_gpu) } static void a6xx_llc_slices_init(struct platform_device *pdev, - struct a6xx_gpu *a6xx_gpu) + struct a6xx_gpu *a6xx_gpu, bool is_a7xx) { struct device_node *phandle; @@ -1823,18 +2127,18 @@ static void a6xx_llc_slices_init(struct platform_device *pdev, return; /* - * There is a different programming path for targets with an mmu500 - * attached, so detect if that is the case + * There is a different programming path for A6xx targets with an + * mmu500 attached, so detect if that is the case */ phandle = of_parse_phandle(pdev->dev.of_node, "iommus", 0); a6xx_gpu->have_mmu500 = (phandle && of_device_is_compatible(phandle, "arm,mmu-500")); of_node_put(phandle); - if (a6xx_gpu->have_mmu500) - a6xx_gpu->llc_mmio = NULL; - else + if (is_a7xx || !a6xx_gpu->have_mmu500) a6xx_gpu->llc_mmio = msm_ioremap(pdev, "cx_mem"); + else + a6xx_gpu->llc_mmio = NULL; a6xx_gpu->llc_slice = llcc_slice_getd(LLCC_GPU); a6xx_gpu->htw_llc_slice = llcc_slice_getd(LLCC_GPUHTW); @@ -1920,7 +2224,7 @@ static int a6xx_gmu_pm_resume(struct msm_gpu *gpu) msm_devfreq_resume(gpu); - a6xx_llc_activate(a6xx_gpu); + adreno_is_a7xx(adreno_gpu) ? 
a7xx_llc_activate(a6xx_gpu) : a6xx_llc_activate(a6xx_gpu);

 	return ret;
 }

@@ -2307,6 +2611,37 @@ static const struct adreno_gpu_funcs funcs_gmuwrapper = {
 	.get_timestamp = a6xx_get_timestamp,
 };

+static const struct adreno_gpu_funcs funcs_a7xx = {
+	.base = {
+		.get_param = adreno_get_param,
+		.set_param = adreno_set_param,
+		.hw_init = a6xx_hw_init,
+		.ucode_load = a6xx_ucode_load,
+		.pm_suspend = a6xx_gmu_pm_suspend,
+		.pm_resume = a6xx_gmu_pm_resume,
+		.recover = a6xx_recover,
+		.submit = a7xx_submit,
+		.active_ring = a6xx_active_ring,
+		.irq = a6xx_irq,
+		.destroy = a6xx_destroy,
+#if defined(CONFIG_DRM_MSM_GPU_STATE)
+		.show = a6xx_show,
+#endif
+		.gpu_busy = a6xx_gpu_busy,
+		.gpu_get_freq = a6xx_gmu_get_freq,
+		.gpu_set_freq = a6xx_gpu_set_freq,
+#if defined(CONFIG_DRM_MSM_GPU_STATE)
+		.gpu_state_get = a6xx_gpu_state_get,
+		.gpu_state_put = a6xx_gpu_state_put,
+#endif
+		.create_address_space = a6xx_create_address_space,
+		.create_private_address_space = a6xx_create_private_address_space,
+		.get_rptr = a6xx_get_rptr,
+		.progress = a6xx_progress,
+	},
+	.get_timestamp = a6xx_gmu_get_timestamp,
+};
+
 struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 {
 	struct msm_drm_private *priv = dev->dev_private;
@@ -2316,6 +2651,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 	struct a6xx_gpu *a6xx_gpu;
 	struct adreno_gpu *adreno_gpu;
 	struct msm_gpu *gpu;
+	bool is_a7xx;
 	int ret;

 	a6xx_gpu = kzalloc(sizeof(*a6xx_gpu), GFP_KERNEL);
@@ -2339,7 +2675,10 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 	adreno_gpu->base.hw_apriv =
 		!!(config->info->quirks & ADRENO_QUIRK_HAS_HW_APRIV);

-	a6xx_llc_slices_init(pdev, a6xx_gpu);
+	/* gpu->info only gets assigned in adreno_gpu_init() */
+	is_a7xx = config->info->family == ADRENO_7XX_GEN1;
+
+	a6xx_llc_slices_init(pdev, a6xx_gpu, is_a7xx);

 	ret = a6xx_set_supported_hw(&pdev->dev, config->info);
 	if (ret) {
@@ -2347,7 +2686,9 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 		return ERR_PTR(ret);
 	}

-	if (adreno_has_gmu_wrapper(adreno_gpu))
+	if (is_a7xx)
+		ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs_a7xx, 1);
+	else if (adreno_has_gmu_wrapper(adreno_gpu))
 		ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs_gmuwrapper, 1);
 	else
 		ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 1);

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 8090dde03280..ea59724f8e41 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -567,6 +567,7 @@ int adreno_hw_init(struct msm_gpu *gpu)
 		ring->cur = ring->start;
 		ring->next = ring->start;
 		ring->memptrs->rptr = 0;
+		ring->memptrs->bv_fence = ring->fctx->completed_fence;

 		/* Detect and clean up an impossible fence, ie. if GPU managed
		 * to scribble something invalid, we don't want that to confuse

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 49f38edf9854..ad07e788a69c 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -46,6 +46,7 @@ enum adreno_family {
 	ADRENO_6XX_GEN2, /* a640 family */
 	ADRENO_6XX_GEN3, /* a650 family */
 	ADRENO_6XX_GEN4, /* a660 family */
+	ADRENO_7XX_GEN1, /* a730 family */
 };

 #define ADRENO_QUIRK_TWO_PASS_USE_WFI BIT(0)
@@ -391,7 +392,8 @@ static inline int adreno_is_a650_family(const struct adreno_gpu *gpu)
 {
 	if (WARN_ON_ONCE(!gpu->info))
 		return false;
-	return gpu->info->family >= ADRENO_6XX_GEN3;
+	return gpu->info->family == ADRENO_6XX_GEN3 ||
+	       adreno_is_a660_family(gpu);
 }

 static inline int adreno_is_a640_family(const struct adreno_gpu *gpu)
@@ -401,6 +403,12 @@ static inline int adreno_is_a640_family(const struct adreno_gpu *gpu)
 	return gpu->info->family == ADRENO_6XX_GEN2;
 }

+static inline int adreno_is_a7xx(struct adreno_gpu *gpu)
+{
+	/* Update with non-fake (i.e. non-A702) Gen 7 GPUs */
+	return gpu->info->family == ADRENO_7XX_GEN1;
+}
+
 u64 adreno_private_address_space_size(struct msm_gpu *gpu);

 int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len);

diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h
index 698b333abccd..0d6beb8cd39a 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.h
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.h
@@ -30,6 +30,8 @@ struct msm_gpu_submit_stats {
 struct msm_rbmemptrs {
 	volatile uint32_t rptr;
 	volatile uint32_t fence;
+	/* Introduced on A7xx */
+	volatile uint32_t bv_fence;

 	volatile struct msm_gpu_submit_stats stats[MSM_GPU_SUBMIT_STATS_COUNT];
 	volatile u64 ttbr0;
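As a quick worked example of the BIT()/GENMASK()/FIELD_PREP() encoding
the patch above switches to (editor's illustration using the A7xx
fence-range values; not driver code):

    #include <linux/bitfield.h>
    #include <linux/bits.h>
    #include <linux/types.h>

    /* REG_A6XX_GMU_AHB_FENCE_RANGE_0, A7xx: upper = 0x32, lower = 0x8a0 */
    static inline u32 a7xx_fence_range_example(void)
    {
    	return BIT(31) |
    	       FIELD_PREP(GENMASK(30, 18), 0x32) |
    	       FIELD_PREP(GENMASK(17, 0), 0x8a0);
    	/* = 0x80000000 | 0x00c80000 | 0x000008a0 = 0x80c808a0 */
    }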
From patchwork Wed Aug 23 12:56:00 2023
X-Patchwork-Submitter: Konrad Dybcio
X-Patchwork-Id: 716261
From: Konrad Dybcio <konrad.dybcio@linaro.org>
Date: Wed, 23 Aug 2023 14:56:00 +0200
Subject: [PATCH v3 07/10] drm/msm/a6xx: Mostly implement A7xx gpu_state
Message-Id: <20230628-topic-a7xx_drmmsm-v3-7-4ee67ccbaf9d@linaro.org>
References: <20230628-topic-a7xx_drmmsm-v3-0-4ee67ccbaf9d@linaro.org>
In-Reply-To: <20230628-topic-a7xx_drmmsm-v3-0-4ee67ccbaf9d@linaro.org>
List-ID: devicetree@vger.kernel.org

Provide the necessary alterations to mostly support state dumping on
A7xx. Newer GPUs will probably require more changes here. Crashdumper
and debugbus remain untested.
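"Indexed registers" here are ADDR/DATA register pairs: software writes
a start index to the ADDR register, then each read of the DATA register
returns the next dword while the index auto-increments. A simplified
sketch of what a6xx_get_indexed_regs() does (illustrative; the
crashstate bookkeeping is omitted):

    static void read_indexed_block(struct msm_gpu *gpu, u32 addr_reg,
    			       u32 data_reg, u32 *out, u32 count)
    {
    	u32 i;

    	/* Program the start index once... */
    	gpu_write(gpu, addr_reg, 0);

    	/* ...then stream out 'count' dwords */
    	for (i = 0; i < count; i++)
    		out[i] = gpu_read(gpu, data_reg);
    }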
Tested-by: Neil Armstrong # on SM8550-QRD Tested-by: Dmitry Baryshkov # sm8450 Signed-off-by: Konrad Dybcio --- drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 52 +++++++++++++++++++++++- drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h | 61 ++++++++++++++++++++++++++++- 2 files changed, 110 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c index 4e5d650578c6..18be2d3bde09 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c @@ -948,6 +948,18 @@ static u32 a6xx_get_cp_roq_size(struct msm_gpu *gpu) return gpu_read(gpu, REG_A6XX_CP_ROQ_THRESHOLDS_2) >> 14; } +static u32 a7xx_get_cp_roq_size(struct msm_gpu *gpu) +{ + /* + * The value at CP_ROQ_THRESHOLDS_2[20:31] is in 4dword units. + * That register however is not directly accessible from APSS on A7xx. + * Program the SQE_UCODE_DBG_ADDR with offset=0x70d3 and read the value. + */ + gpu_write(gpu, REG_A6XX_CP_SQE_UCODE_DBG_ADDR, 0x70d3); + + return 4 * (gpu_read(gpu, REG_A6XX_CP_SQE_UCODE_DBG_DATA) >> 20); +} + /* Read a block of data from an indexed register pair */ static void a6xx_get_indexed_regs(struct msm_gpu *gpu, struct a6xx_gpu_state *a6xx_state, @@ -1019,8 +1031,40 @@ static void a6xx_get_indexed_registers(struct msm_gpu *gpu, /* Restore the size in the hardware */ gpu_write(gpu, REG_A6XX_CP_MEM_POOL_SIZE, mempool_size); +} + +static void a7xx_get_indexed_registers(struct msm_gpu *gpu, + struct a6xx_gpu_state *a6xx_state) +{ + int i, indexed_count, mempool_count; + + indexed_count = ARRAY_SIZE(a7xx_indexed_reglist); + mempool_count = ARRAY_SIZE(a7xx_cp_bv_mempool_indexed); - a6xx_state->nr_indexed_regs = count; + a6xx_state->indexed_regs = state_kcalloc(a6xx_state, + indexed_count + mempool_count, + sizeof(*a6xx_state->indexed_regs)); + if (!a6xx_state->indexed_regs) + return; + + a6xx_state->nr_indexed_regs = indexed_count + mempool_count; + + /* First read the common regs */ + for (i = 0; i < indexed_count; i++) + a6xx_get_indexed_regs(gpu, a6xx_state, &a7xx_indexed_reglist[i], + &a6xx_state->indexed_regs[i]); + + gpu_rmw(gpu, REG_A6XX_CP_CHICKEN_DBG, 0, BIT(2)); + gpu_rmw(gpu, REG_A7XX_CP_BV_CHICKEN_DBG, 0, BIT(2)); + + /* Get the contents of the CP_BV mempool */ + for (i = 0; i < mempool_count; i++) + a6xx_get_indexed_regs(gpu, a6xx_state, a7xx_cp_bv_mempool_indexed, + &a6xx_state->indexed_regs[indexed_count - 1 + i]); + + gpu_rmw(gpu, REG_A6XX_CP_CHICKEN_DBG, BIT(2), 0); + gpu_rmw(gpu, REG_A7XX_CP_BV_CHICKEN_DBG, BIT(2), 0); + return; } struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu) @@ -1056,6 +1100,12 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu) return &a6xx_state->base; /* Get the banks of indexed registers */ + if (adreno_is_a7xx(adreno_gpu)) { + a7xx_get_indexed_registers(gpu, a6xx_state); + /* Further codeflow is untested on A7xx. 
 
 /* Read a block of data from an indexed register pair */
 static void a6xx_get_indexed_regs(struct msm_gpu *gpu,
 		struct a6xx_gpu_state *a6xx_state,
@@ -1019,8 +1031,40 @@ static void a6xx_get_indexed_registers(struct msm_gpu *gpu,
 
 	/* Restore the size in the hardware */
 	gpu_write(gpu, REG_A6XX_CP_MEM_POOL_SIZE, mempool_size);
+}
+
+static void a7xx_get_indexed_registers(struct msm_gpu *gpu,
+		struct a6xx_gpu_state *a6xx_state)
+{
+	int i, indexed_count, mempool_count;
+
+	indexed_count = ARRAY_SIZE(a7xx_indexed_reglist);
+	mempool_count = ARRAY_SIZE(a7xx_cp_bv_mempool_indexed);
 
-	a6xx_state->nr_indexed_regs = count;
+	a6xx_state->indexed_regs = state_kcalloc(a6xx_state,
+					indexed_count + mempool_count,
+					sizeof(*a6xx_state->indexed_regs));
+	if (!a6xx_state->indexed_regs)
+		return;
+
+	a6xx_state->nr_indexed_regs = indexed_count + mempool_count;
+
+	/* First read the common regs */
+	for (i = 0; i < indexed_count; i++)
+		a6xx_get_indexed_regs(gpu, a6xx_state, &a7xx_indexed_reglist[i],
+			&a6xx_state->indexed_regs[i]);
+
+	gpu_rmw(gpu, REG_A6XX_CP_CHICKEN_DBG, 0, BIT(2));
+	gpu_rmw(gpu, REG_A7XX_CP_BV_CHICKEN_DBG, 0, BIT(2));
+
+	/* Get the contents of the CP_BV mempool */
+	for (i = 0; i < mempool_count; i++)
+		a6xx_get_indexed_regs(gpu, a6xx_state, &a7xx_cp_bv_mempool_indexed[i],
+			&a6xx_state->indexed_regs[indexed_count + i]);
+
+	gpu_rmw(gpu, REG_A6XX_CP_CHICKEN_DBG, BIT(2), 0);
+	gpu_rmw(gpu, REG_A7XX_CP_BV_CHICKEN_DBG, BIT(2), 0);
+
+	return;
 }
 
 struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
@@ -1056,6 +1100,12 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
 		return &a6xx_state->base;
 
 	/* Get the banks of indexed registers */
+	if (adreno_is_a7xx(adreno_gpu)) {
+		a7xx_get_indexed_registers(gpu, a6xx_state);
+		/* Further codeflow is untested on A7xx. */
+		return &a6xx_state->base;
+	}
+
 	a6xx_get_indexed_registers(gpu, a6xx_state);
 
 	/*
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h
index e788ed72eb0d..8d7e6f26480a 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h
@@ -338,6 +338,28 @@ static const struct a6xx_registers a6xx_vbif_reglist =
 static const struct a6xx_registers a6xx_gbif_reglist =
 	REGS(a6xx_gbif_registers, 0, 0);
 
+static const u32 a7xx_ahb_registers[] = {
+	/* RBBM_STATUS */
+	0x210, 0x210,
+	/* RBBM_STATUS2-3 */
+	0x212, 0x213,
+};
+
+static const u32 a7xx_gbif_registers[] = {
+	0x3c00, 0x3c0b,
+	0x3c40, 0x3c42,
+	0x3c45, 0x3c47,
+	0x3c49, 0x3c4a,
+	0x3cc0, 0x3cd1,
+};
+
+static const struct a6xx_registers a7xx_ahb_reglist[] = {
+	REGS(a7xx_ahb_registers, 0, 0),
+};
+
+static const struct a6xx_registers a7xx_gbif_reglist =
+	REGS(a7xx_gbif_registers, 0, 0);
+
 static const u32 a6xx_gmu_gx_registers[] = {
 	/* GMU GX */
 	0x0000, 0x0000, 0x0010, 0x0013, 0x0016, 0x0016, 0x0018, 0x001b,
@@ -384,14 +406,17 @@ static const struct a6xx_registers a6xx_gmu_reglist[] = {
 };
 
 static u32 a6xx_get_cp_roq_size(struct msm_gpu *gpu);
+static u32 a7xx_get_cp_roq_size(struct msm_gpu *gpu);
 
-static struct a6xx_indexed_registers {
+struct a6xx_indexed_registers {
 	const char *name;
 	u32 addr;
 	u32 data;
 	u32 count;
 	u32 (*count_fn)(struct msm_gpu *gpu);
-} a6xx_indexed_reglist[] = {
+};
+
+static struct a6xx_indexed_registers a6xx_indexed_reglist[] = {
 	{ "CP_SQE_STAT", REG_A6XX_CP_SQE_STAT_ADDR,
 		REG_A6XX_CP_SQE_STAT_DATA, 0x33, NULL },
 	{ "CP_DRAW_STATE", REG_A6XX_CP_DRAW_STATE_ADDR,
@@ -402,11 +427,43 @@ static struct a6xx_indexed_registers {
 		REG_A6XX_CP_ROQ_DBG_DATA, 0, a6xx_get_cp_roq_size},
 };
 
+static struct a6xx_indexed_registers a7xx_indexed_reglist[] = {
+	{ "CP_SQE_STAT", REG_A6XX_CP_SQE_STAT_ADDR,
+		REG_A6XX_CP_SQE_STAT_DATA, 0x33, NULL },
+	{ "CP_DRAW_STATE", REG_A6XX_CP_DRAW_STATE_ADDR,
+		REG_A6XX_CP_DRAW_STATE_DATA, 0x100, NULL },
+	{ "CP_UCODE_DBG_DATA", REG_A6XX_CP_SQE_UCODE_DBG_ADDR,
+		REG_A6XX_CP_SQE_UCODE_DBG_DATA, 0x8000, NULL },
+	{ "CP_BV_SQE_STAT_ADDR", REG_A7XX_CP_BV_SQE_STAT_ADDR,
+		REG_A7XX_CP_BV_SQE_STAT_DATA, 0x33, NULL },
+	{ "CP_BV_DRAW_STATE_ADDR", REG_A7XX_CP_BV_DRAW_STATE_ADDR,
+		REG_A7XX_CP_BV_DRAW_STATE_DATA, 0x100, NULL },
+	{ "CP_BV_SQE_UCODE_DBG_ADDR", REG_A7XX_CP_BV_SQE_UCODE_DBG_ADDR,
+		REG_A7XX_CP_BV_SQE_UCODE_DBG_DATA, 0x8000, NULL },
+	{ "CP_SQE_AC_STAT_ADDR", REG_A7XX_CP_SQE_AC_STAT_ADDR,
+		REG_A7XX_CP_SQE_AC_STAT_DATA, 0x33, NULL },
+	{ "CP_LPAC_DRAW_STATE_ADDR", REG_A7XX_CP_LPAC_DRAW_STATE_ADDR,
+		REG_A7XX_CP_LPAC_DRAW_STATE_DATA, 0x100, NULL },
+	{ "CP_SQE_AC_UCODE_DBG_ADDR", REG_A7XX_CP_SQE_AC_UCODE_DBG_ADDR,
+		REG_A7XX_CP_SQE_AC_UCODE_DBG_DATA, 0x8000, NULL },
+	{ "CP_LPAC_FIFO_DBG_ADDR", REG_A7XX_CP_LPAC_FIFO_DBG_ADDR,
+		REG_A7XX_CP_LPAC_FIFO_DBG_DATA, 0x40, NULL },
+	{ "CP_ROQ", REG_A6XX_CP_ROQ_DBG_ADDR,
+		REG_A6XX_CP_ROQ_DBG_DATA, 0, a7xx_get_cp_roq_size },
+};
+
 static struct a6xx_indexed_registers a6xx_cp_mempool_indexed = {
 	"CP_MEMPOOL", REG_A6XX_CP_MEM_POOL_DBG_ADDR,
 		REG_A6XX_CP_MEM_POOL_DBG_DATA, 0x2060, NULL,
 };
 
+static struct a6xx_indexed_registers a7xx_cp_bv_mempool_indexed[] = {
+	{ "CP_MEMPOOL", REG_A6XX_CP_MEM_POOL_DBG_ADDR,
+		REG_A6XX_CP_MEM_POOL_DBG_DATA, 0x2100, NULL },
+	{ "CP_BV_MEMPOOL", REG_A7XX_CP_BV_MEM_POOL_DBG_ADDR,
+		REG_A7XX_CP_BV_MEM_POOL_DBG_DATA, 0x2100, NULL },
+};
+
 #define DEBUGBUS(_id, _count) { .id = _id, .name = #_id, .count = _count }
 
 static const struct a6xx_debugbus_block {
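A side note on the mempool dump in the patch above: the reads are
bracketed by gpu_rmw() calls that first set, then clear, BIT(2) of the
CHICKEN_DBG registers. As a sketch of what that helper does (mirroring
the read-modify-write semantics of the msm driver's gpu_rmw(); the
standalone name here is illustrative):

	/* Clear the bits in 'mask', then set the bits in 'or'. So
	 * rmw_sketch(gpu, reg, 0, BIT(2)) sets bit 2, and
	 * rmw_sketch(gpu, reg, BIT(2), 0) clears it again. */
	static inline void rmw_sketch(struct msm_gpu *gpu, u32 reg,
				      u32 mask, u32 or)
	{
		u32 val = gpu_read(gpu, reg);

		val &= ~mask;
		gpu_write(gpu, reg, val | or);
	}

Likewise, the DEBUGBUS() define shown above relies on the
preprocessor's stringizing operator, so a hypothetical
DEBUGBUS(FOO, 0x100) expands to { .id = FOO, .name = "FOO",
.count = 0x100 }.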
From patchwork Wed Aug 23 12:56:03 2023
X-Patchwork-Submitter: Konrad Dybcio
X-Patchwork-Id: 716260
From: Konrad Dybcio
Date: Wed, 23 Aug 2023 14:56:03 +0200
Subject: [PATCH v3 10/10] drm/msm/a6xx: Poll for GBIF unhalt status in hw_init
Message-Id: <20230628-topic-a7xx_drmmsm-v3-10-4ee67ccbaf9d@linaro.org>
References: <20230628-topic-a7xx_drmmsm-v3-0-4ee67ccbaf9d@linaro.org>
In-Reply-To: <20230628-topic-a7xx_drmmsm-v3-0-4ee67ccbaf9d@linaro.org>
To: Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
    David Airlie, Daniel Vetter, Rob Herring, Krzysztof Kozlowski,
    Conor Dooley, Bjorn Andersson
Cc: Marijn Suijten, linux-arm-msm@vger.kernel.org,
    dri-devel@lists.freedesktop.org, freedreno@lists.freedesktop.org,
    devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
    Konrad Dybcio, Neil Armstrong
X-Mailer: b4 0.12.2
X-Mailing-List: devicetree@vger.kernel.org

Some GPUs - particularly A7xx ones - are really, really stubborn and
sometimes take a longer-than-expected time to finish unhalting GBIF.
Note that this is not caused by the request a few lines above.

Poll for the unhalt ack to make sure we're not trying to write bits to
an essentially dead GPU that can't receive data on its end of the bus.
Failing to do this will result in inexplicable GMU timeouts or worse.

This is a rather ugly hack which introduces a whole lot of latency.

Tested-by: Neil Armstrong # on SM8550-QRD
Tested-by: Dmitry Baryshkov # sm8450
Signed-off-by: Konrad Dybcio
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 2313620084b6..11cb410e0ac7 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -1629,6 +1629,10 @@ static int hw_init(struct msm_gpu *gpu)
 		mb();
 	}
 
+	/* Some GPUs are stubborn and take their sweet time to unhalt GBIF! */
+	if (adreno_is_a7xx(adreno_gpu) && a6xx_has_gbif(adreno_gpu))
+		spin_until(!gpu_read(gpu, REG_A6XX_GBIF_HALT_ACK));
+
 	gpu_write(gpu, REG_A6XX_RBBM_SECVID_TSB_CNTL, 0);
 
 	if (adreno_is_a619_holi(adreno_gpu))