From patchwork Wed Mar 2 17:27:28 2022
X-Patchwork-Submitter: Akhil P Oommen
X-Patchwork-Id: 548222
From: Akhil P Oommen
To: freedreno, dri-devel@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Rob Clark, Dmitry Baryshkov, Bjorn Andersson
Cc: Abhinav Kumar, Dan Carpenter, Daniel Vetter, David Airlie,
 Douglas Anderson, Jonathan Marek, Jordan Crouse, Sean Paul, Wang Qing,
 linux-kernel@vger.kernel.org
Subject: [PATCH v1 02/10] drm/msm/a6xx: Send NMI to gmu when it is hung
Date: Wed, 2 Mar 2022 22:57:28 +0530
Message-Id: <20220302225551.v1.2.Icda301aa85f1e4367601fa9b830b3365d377e669@changeid>
In-Reply-To: <1646242056-2456-1-git-send-email-quic_akhilpo@quicinc.com>
References: <1646242056-2456-1-git-send-email-quic_akhilpo@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

While capturing gmu state, first send an NMI to gmu when it is hung.
This helps to move gmu to a safe state.
Signed-off-by: Akhil P Oommen
Reported-by: kernel test robot
---
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c       | 38 ++++++++++++++++++++++++++++++
 drivers/gpu/drm/msm/adreno/a6xx_gmu.h       |  1 +
 drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 14 ++++++++++-
 3 files changed, 52 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 3e325e2..f208a81 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -14,6 +14,38 @@
 #include "msm_gpu_trace.h"
 #include "msm_mmu.h"
 
+void a6xx_gmu_send_nmi(struct a6xx_gmu *gmu)
+{
+	struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
+	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+	struct msm_gpu *gpu = &adreno_gpu->base;
+	u32 val;
+
+	if (a6xx_gmu_gx_is_on(gmu) && a6xx_is_smmu_stalled(gpu)) {
+		DRM_DEV_ERROR(gmu->dev,
+			"Skipping GMU NMI since SMMU is stalled\n");
+		return;
+	}
+
+	/* Don't retrigger NMI if gmu reset is already active */
+	val = gmu_read(gmu, REG_A6XX_GMU_CM3_FW_INIT_RESULT);
+	if (val & 0xE00)
+		return;
+
+	/* Mask all interrupts from GMU first */
+	gmu_write(gmu, REG_A6XX_GMU_GMU2HOST_INTR_MASK, 0xFFFFFFFF);
+
+	/* Trigger NMI to make gmu save its internal state to ddr */
+	val = gmu_read(gmu, REG_A6XX_GMU_CM3_CFG);
+	gmu_write(gmu, REG_A6XX_GMU_CM3_CFG, val | BIT(9));
+
+	/* Barrier to ensure write is posted before we proceed */
+	wmb();
+
+	/* Small delay to ensure state copy to ddr is complete at GMU */
+	udelay(200);
+}
+
 static void a6xx_gmu_fault(struct a6xx_gmu *gmu)
 {
 	struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
@@ -790,6 +822,12 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state)
 	gmu_write(gmu, REG_A6XX_GMU_CM3_FW_INIT_RESULT, 0);
 	gmu_write(gmu, REG_A6XX_GMU_CM3_BOOT_CONFIG, 0x02);
 
+	/*
+	 * Make sure that the NMI bit is cleared by configuring the reset value
+	 * here
+	 */
+	gmu_write(gmu, REG_A6XX_GMU_CM3_CFG, 0x4052);
+
 	/* Write the iova of the HFI table */
 	gmu_write(gmu, REG_A6XX_GMU_HFI_QTBL_ADDR, gmu->hfi.iova);
 	gmu_write(gmu, REG_A6XX_GMU_HFI_QTBL_INFO, 1);

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
index 84bd516..4228ec1 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
@@ -186,5 +186,6 @@ int a6xx_hfi_set_freq(struct a6xx_gmu *gmu, int index);
 
 bool a6xx_gmu_gx_is_on(struct a6xx_gmu *gmu);
 bool a6xx_gmu_sptprac_is_on(struct a6xx_gmu *gmu);
+void a6xx_gmu_send_nmi(struct a6xx_gmu *gmu);
 
 #endif

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
index 7de9d2f..09b2ff0 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
@@ -964,6 +964,18 @@ static void a6xx_get_indexed_registers(struct msm_gpu *gpu,
 	a6xx_state->nr_indexed_regs = count;
 }
 
+static void a6xx_get_gmu_state(struct msm_gpu *gpu, struct a6xx_gpu_state *a6xx_state)
+{
+	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+	struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
+
+	if (gmu->hung)
+		a6xx_gmu_send_nmi(gmu);
+
+	a6xx_get_gmu_registers(gpu, a6xx_state);
+}
+
 struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
 {
 	struct a6xx_crashdumper _dumper = { 0 }, *dumper = NULL;
@@ -980,7 +992,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
 	/* Get the generic state from the adreno core */
 	adreno_gpu_state_get(gpu, &a6xx_state->base);
 
-	a6xx_get_gmu_registers(gpu, a6xx_state);
+	a6xx_get_gmu_state(gpu, a6xx_state);
 
 	a6xx_state->gmu_log = a6xx_snapshot_gmu_bo(a6xx_state, &a6xx_gpu->gmu.log);
 	a6xx_state->gmu_hfi = a6xx_snapshot_gmu_bo(a6xx_state, &a6xx_gpu->gmu.hfi);
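
For reviewers tracing how this path is reached, here is a condensed sketch of
the call flow. This is not code from the patch: a6xx_gmu_fault() (whose opening
lines appear as hunk context above) is trimmed down to the steps relevant here,
and the trimming is mine.

/*
 * Sketch only: condensed from drivers/gpu/drm/msm/adreno, not a literal
 * quote of either function.
 */

/* a6xx_gmu.c -- a GMU fault marks the GMU hung and queues GPU recovery */
static void a6xx_gmu_fault(struct a6xx_gmu *gmu)
{
	struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
	struct msm_gpu *gpu = &a6xx_gpu->base.base;

	gmu->hung = true;

	/* Recovery work ends up calling gpu->funcs->gpu_state_get() */
	kthread_queue_work(gpu->worker, &gpu->recover_work);
}

/* a6xx_gpu_state.c -- crash capture (this patch): NMI a hung GMU first */
static void a6xx_get_gmu_state(struct msm_gpu *gpu, struct a6xx_gpu_state *a6xx_state)
{
	struct a6xx_gmu *gmu = &to_a6xx_gpu(to_adreno_gpu(gpu))->gmu;

	if (gmu->hung)
		a6xx_gmu_send_nmi(gmu);	/* GMU saves its state to ddr */

	a6xx_get_gmu_registers(gpu, a6xx_state);
}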
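
A note on the magic numbers in a6xx_gmu_send_nmi(): BIT(9) of
REG_A6XX_GMU_CM3_CFG is the host-triggered NMI request (the reset value 0x4052
written in a6xx_gmu_fw_start() has bit 9 clear), and the 0xE00 test on
REG_A6XX_GMU_CM3_FW_INIT_RESULT checks bits 9-11, set while a GMU reset/NMI is
already in flight. The macro names below are hypothetical, inferred from this
patch's usage; they are not taken from the msm register headers.

#include <linux/bits.h>

/* Hypothetical names, inferred from this patch; not from the msm headers. */
#define A6XX_GMU_CM3_CFG_NMI		BIT(9)		/* host-requested NMI */
#define A6XX_GMU_RESET_ACTIVE_MASK	GENMASK(11, 9)	/* == 0xE00 */

/* True if the GMU already has a reset/NMI in flight, per the 0xE00 check */
static inline bool gmu_reset_active(u32 fw_init_result)
{
	return fw_init_result & A6XX_GMU_RESET_ACTIVE_MASK;
}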