From patchwork Fri Feb 4 21:17:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Abhinav Kumar X-Patchwork-Id: 539913 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 93C76C433F5 for ; Fri, 4 Feb 2022 21:19:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244315AbiBDVTQ (ORCPT ); Fri, 4 Feb 2022 16:19:16 -0500 Received: from alexa-out-sd-01.qualcomm.com ([199.106.114.38]:38681 "EHLO alexa-out-sd-01.qualcomm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346332AbiBDVRk (ORCPT ); Fri, 4 Feb 2022 16:17:40 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1644009461; x=1675545461; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=2aALIW7IqNAs6C2Npn+GsQjF0lt1O5PKxAykRlLfUfY=; b=Li2sVeOxsPPz0HHp7+f3R1752w7FW0PCVbc5EfBmq7LfQSdfBjJvIY3A eW8Vahwistf9Yccp0Cleodznez0yGY68V6DkyzVsPvHYOgIdfmdq4dQ+e EpU/VCPaNLv83s90eSPxhBMM52cGafoMu0ax1sA+cbXyFwoDTQit89t6c I=; Received: from unknown (HELO ironmsg05-sd.qualcomm.com) ([10.53.140.145]) by alexa-out-sd-01.qualcomm.com with ESMTP; 04 Feb 2022 13:17:40 -0800 X-QCInternal: smtphost Received: from nasanex01c.na.qualcomm.com ([10.47.97.222]) by ironmsg05-sd.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Feb 2022 13:17:40 -0800 Received: from nalasex01a.na.qualcomm.com (10.47.209.196) by nasanex01c.na.qualcomm.com (10.47.97.222) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:39 -0800 Received: from abhinavk-linux.qualcomm.com (10.80.80.8) by nalasex01a.na.qualcomm.com (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:39 -0800 From: Abhinav Kumar To: CC: Abhinav Kumar , , , , , , , , , , , Subject: [PATCH 01/12] drm/msm/dpu: add writeback blocks to the sm8250 DPU catalog Date: Fri, 4 Feb 2022 13:17:14 -0800 Message-ID: <1644009445-17320-2-git-send-email-quic_abhinavk@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> References: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01a.na.qualcomm.com (10.47.209.196) Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add writeback blocks to the sm8250 DPU hardware catalog. Other chipsets support writeback too but add it to sm8250 to prototype the feature so that it can be easily extended to other chipsets. Signed-off-by: Abhinav Kumar Reviewed-by: Dmitry Baryshkov --- drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c | 73 +++++++++++++++++++++++++- drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h | 66 ++++++++++++++++++++++- 2 files changed, 137 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c index aa75991..fdd878d 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only -/* Copyright (c) 2015-2018, The Linux Foundation. 
All rights reserved. +/* Copyright (c) 2022. Qualcomm Innovation Center, Inc. All rights reserved. + * Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. */ #define pr_fmt(fmt) "[drm:%s:%d] " fmt, __func__, __LINE__ @@ -90,6 +91,15 @@ BIT(MDP_INTF3_INTR) | \ BIT(MDP_INTF4_INTR)) +#define WB_SM8250_MASK (BIT(DPU_WB_LINE_MODE) | \ + BIT(DPU_WB_UBWC) | \ + BIT(DPU_WB_YUV_CONFIG) | \ + BIT(DPU_WB_PIPE_ALPHA) | \ + BIT(DPU_WB_XY_ROI_OFFSET) | \ + BIT(DPU_WB_QOS) | \ + BIT(DPU_WB_QOS_8LVL) | \ + BIT(DPU_WB_CDP) | \ + BIT(DPU_WB_INPUT_CTRL)) #define DEFAULT_PIXEL_RAM_SIZE (50 * 1024) #define DEFAULT_DPU_LINE_WIDTH 2048 @@ -177,6 +187,40 @@ static const uint32_t plane_formats_yuv[] = { DRM_FORMAT_YVU420, }; +static const uint32_t wb2_formats[] = { + DRM_FORMAT_RGB565, + DRM_FORMAT_BGR565, + DRM_FORMAT_RGB888, + DRM_FORMAT_ARGB8888, + DRM_FORMAT_RGBA8888, + DRM_FORMAT_ABGR8888, + DRM_FORMAT_XRGB8888, + DRM_FORMAT_RGBX8888, + DRM_FORMAT_XBGR8888, + DRM_FORMAT_ARGB1555, + DRM_FORMAT_RGBA5551, + DRM_FORMAT_XRGB1555, + DRM_FORMAT_RGBX5551, + DRM_FORMAT_ARGB4444, + DRM_FORMAT_RGBA4444, + DRM_FORMAT_RGBX4444, + DRM_FORMAT_XRGB4444, + DRM_FORMAT_BGR565, + DRM_FORMAT_BGR888, + DRM_FORMAT_ABGR8888, + DRM_FORMAT_BGRA8888, + DRM_FORMAT_BGRX8888, + DRM_FORMAT_XBGR8888, + DRM_FORMAT_ABGR1555, + DRM_FORMAT_BGRA5551, + DRM_FORMAT_XBGR1555, + DRM_FORMAT_BGRX5551, + DRM_FORMAT_ABGR4444, + DRM_FORMAT_BGRA4444, + DRM_FORMAT_BGRX4444, + DRM_FORMAT_XBGR4444, +}; + /************************************************************* * DPU sub blocks config *************************************************************/ @@ -317,6 +361,8 @@ static const struct dpu_mdp_cfg sm8250_mdp[] = { .reg_off = 0x2C4, .bit_off = 8}, .clk_ctrls[DPU_CLK_CTRL_REG_DMA] = { .reg_off = 0x2BC, .bit_off = 20}, + .clk_ctrls[DPU_CLK_CTRL_WB2] = { + .reg_off = 0x3B8, .bit_off = 24}, }, }; @@ -862,6 +908,29 @@ static const struct dpu_intf_cfg sc7280_intf[] = { }; /************************************************************* + * Writeback blocks config + *************************************************************/ +#define WB_BLK(_name, _id, _base, _features, _clk_ctrl, \ + __xin_id, vbif_id, _reg, _wb_done_bit) \ + { \ + .name = _name, .id = _id, \ + .base = _base, .len = 0x2c8, \ + .features = _features, \ + .format_list = wb2_formats, \ + .num_formats = ARRAY_SIZE(wb2_formats), \ + .clk_ctrl = _clk_ctrl, \ + .xin_id = __xin_id, \ + .vbif_idx = vbif_id, \ + .maxlinewidth = DEFAULT_DPU_LINE_WIDTH, \ + .intr_wb_done = DPU_IRQ_IDX(_reg, _wb_done_bit) \ + } + +static const struct dpu_wb_cfg sm8250_wb[] = { + WB_BLK("wb_2", WB_2, 0x65000, WB_SM8250_MASK, DPU_CLK_CTRL_WB2, 6, + VBIF_RT, MDP_SSPP_TOP0_INTR, 4), +}; + +/************************************************************* * VBIF sub blocks config *************************************************************/ /* VBIF QOS remap */ @@ -1225,6 +1294,8 @@ static void sm8250_cfg_init(struct dpu_mdss_cfg *dpu_cfg) .intf = sm8150_intf, .vbif_count = ARRAY_SIZE(sdm845_vbif), .vbif = sdm845_vbif, + .wb_count = ARRAY_SIZE(sm8250_wb), + .wb = sm8250_wb, .reg_dma_count = 1, .dma_cfg = sm8250_regdma, .perf = sm8250_perf_data, diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h index 31af04a..a3ca695 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h @@ -1,5 +1,7 @@ /* SPDX-License-Identifier: GPL-2.0-only */ -/* Copyright (c) 2015-2018, 2020 The Linux Foundation. 
All rights reserved. +/* + * Copyright (c) 2022. Qualcomm Innovation Center, Inc. All rights reserved. + * Copyright (c) 2015-2018, 2020 The Linux Foundation. All rights reserved. */ #ifndef _DPU_HW_CATALOG_H @@ -209,6 +211,42 @@ enum { }; /** + * WB sub-blocks and features + * @DPU_WB_LINE_MODE Writeback module supports line/linear mode + * @DPU_WB_BLOCK_MODE Writeback module supports block mode read + * @DPU_WB_CHROMA_DOWN, Writeback chroma down block, + * @DPU_WB_DOWNSCALE, Writeback integer downscaler, + * @DPU_WB_DITHER, Dither block + * @DPU_WB_TRAFFIC_SHAPER, Writeback traffic shaper bloc + * @DPU_WB_UBWC, Writeback Universal bandwidth compression + * @DPU_WB_YUV_CONFIG Writeback supports output of YUV colorspace + * @DPU_WB_PIPE_ALPHA Writeback supports pipe alpha + * @DPU_WB_XY_ROI_OFFSET Writeback supports x/y-offset of out ROI in + * the destination image + * @DPU_WB_QOS, Writeback supports QoS control, danger/safe/creq + * @DPU_WB_QOS_8LVL, Writeback supports 8-level QoS control + * @DPU_WB_CDP Writeback supports client driven prefetch + * @DPU_WB_INPUT_CTRL Writeback supports from which pp block input pixel + * data arrives. + * @DPU_WB_CROP CWB supports cropping + * @DPU_WB_MAX maximum value + */ +enum { + DPU_WB_LINE_MODE = 0x1, + DPU_WB_BLOCK_MODE, + DPU_WB_UBWC, + DPU_WB_YUV_CONFIG, + DPU_WB_PIPE_ALPHA, + DPU_WB_XY_ROI_OFFSET, + DPU_WB_QOS, + DPU_WB_QOS_8LVL, + DPU_WB_CDP, + DPU_WB_INPUT_CTRL, + DPU_WB_CROP, + DPU_WB_MAX +}; + +/** * VBIF sub-blocks and features * @DPU_VBIF_QOS_OTLIM VBIF supports OT Limit * @DPU_VBIF_QOS_REMAP VBIF supports QoS priority remap @@ -439,6 +477,7 @@ enum dpu_clk_ctrl_type { DPU_CLK_CTRL_CURSOR1, DPU_CLK_CTRL_INLINE_ROT0_SSPP, DPU_CLK_CTRL_REG_DMA, + DPU_CLK_CTRL_WB2, DPU_CLK_CTRL_MAX, }; @@ -577,6 +616,28 @@ struct dpu_intf_cfg { }; /** + * struct dpu_wb_cfg - information of writeback blocks + * @DPU_HW_BLK_INFO: refer to the description above for DPU_HW_BLK_INFO + * @vbif_idx: vbif client index + * @maxlinewidth: max line width supported by writeback block + * @xin_id: bus client identifier + * @intr_wb_done: interrupt index for WB_DONE + * @format_list: list of formats supported by this writeback block + * @num_formats: number of formats supported by this writeback block + * @clk_ctrl: clock control identifier + */ +struct dpu_wb_cfg { + DPU_HW_BLK_INFO; + u8 vbif_idx; + u32 maxlinewidth; + u32 xin_id; + s32 intr_wb_done; + const u32 *format_list; + u32 num_formats; + enum dpu_clk_ctrl_type clk_ctrl; +}; + +/** * struct dpu_vbif_dynamic_ot_cfg - dynamic OT setting * @pps pixel per seconds * @ot_limit OT limit to use up to specified pixel per second @@ -758,6 +819,9 @@ struct dpu_mdss_cfg { u32 vbif_count; const struct dpu_vbif_cfg *vbif; + u32 wb_count; + const struct dpu_wb_cfg *wb; + u32 reg_dma_count; struct dpu_reg_dma_cfg dma_cfg; From patchwork Fri Feb 4 21:17:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Abhinav Kumar X-Patchwork-Id: 540087 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 50115C433EF for ; Fri, 4 Feb 2022 21:19:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233006AbiBDVSy (ORCPT ); Fri, 4 Feb 2022 16:18:54 -0500 Received: from alexa-out.qualcomm.com ([129.46.98.28]:9680 "EHLO alexa-out.qualcomm.com" 
rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241761AbiBDVRr (ORCPT ); Fri, 4 Feb 2022 16:17:47 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1644009467; x=1675545467; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=/wD7+hPfHDed148UKINmEfjni+K3mj2sQihZeFWcuQw=; b=moo48XpeolhJKx+qLq1kFRyl1tbsYFI+GNiLPreolyXJ5LUzUoo1l56U ydL0IFIHJDhtzZvDkgwYz9WLNuVRrGO24wXAlMSl8qIcpBVflsoQNZyaT Zimtv0O2oe2G2yFf4XWFirNVUNcYW5HTkR99DbrheTuifHcQ/BdE8MsmN s=; Received: from ironmsg09-lv.qualcomm.com ([10.47.202.153]) by alexa-out.qualcomm.com with ESMTP; 04 Feb 2022 13:17:47 -0800 X-QCInternal: smtphost Received: from nasanex01c.na.qualcomm.com ([10.47.97.222]) by ironmsg09-lv.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Feb 2022 13:17:46 -0800 Received: from nalasex01a.na.qualcomm.com (10.47.209.196) by nasanex01c.na.qualcomm.com (10.47.97.222) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:46 -0800 Received: from abhinavk-linux.qualcomm.com (10.80.80.8) by nalasex01a.na.qualcomm.com (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:45 -0800 From: Abhinav Kumar To: CC: Abhinav Kumar , , , , , , , , , , , Subject: [PATCH 02/12] drm/msm/dpu: add dpu_hw_wb abstraction for writeback blocks Date: Fri, 4 Feb 2022 13:17:15 -0800 Message-ID: <1644009445-17320-3-git-send-email-quic_abhinavk@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> References: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01a.na.qualcomm.com (10.47.209.196) Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add the dpu_hw_wb abstraction to program registers related to the writeback block. These will be invoked once all the configuration is set and ready to be programmed to the registers. Signed-off-by: Abhinav Kumar --- drivers/gpu/drm/msm/Makefile | 1 + drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c | 267 ++++++++++++++++++++++++++++++ drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.h | 145 ++++++++++++++++ 3 files changed, 413 insertions(+) create mode 100644 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c create mode 100644 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.h diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile index 03ab55c..c43ef35 100644 --- a/drivers/gpu/drm/msm/Makefile +++ b/drivers/gpu/drm/msm/Makefile @@ -66,6 +66,7 @@ msm-y := \ disp/dpu1/dpu_hw_top.o \ disp/dpu1/dpu_hw_util.o \ disp/dpu1/dpu_hw_vbif.o \ + disp/dpu1/dpu_hw_wb.o \ disp/dpu1/dpu_io_util.o \ disp/dpu1/dpu_kms.o \ disp/dpu1/dpu_mdss.o \ diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c new file mode 100644 index 0000000..d395475 --- /dev/null +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c @@ -0,0 +1,267 @@ +// SPDX-License-Identifier: GPL-2.0-only + /* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. 
All rights reserved + */ + +#include "dpu_hw_mdss.h" +#include "dpu_hwio.h" +#include "dpu_hw_catalog.h" +#include "dpu_hw_wb.h" +#include "dpu_formats.h" +#include "dpu_kms.h" + +#define WB_DST_FORMAT 0x000 +#define WB_DST_OP_MODE 0x004 +#define WB_DST_PACK_PATTERN 0x008 +#define WB_DST0_ADDR 0x00C +#define WB_DST1_ADDR 0x010 +#define WB_DST2_ADDR 0x014 +#define WB_DST3_ADDR 0x018 +#define WB_DST_YSTRIDE0 0x01C +#define WB_DST_YSTRIDE1 0x020 +#define WB_DST_YSTRIDE1 0x020 +#define WB_DST_DITHER_BITDEPTH 0x024 +#define WB_DST_MATRIX_ROW0 0x030 +#define WB_DST_MATRIX_ROW1 0x034 +#define WB_DST_MATRIX_ROW2 0x038 +#define WB_DST_MATRIX_ROW3 0x03C +#define WB_DST_WRITE_CONFIG 0x048 +#define WB_ROTATION_DNSCALER 0x050 +#define WB_ROTATOR_PIPE_DOWNSCALER 0x054 +#define WB_N16_INIT_PHASE_X_C03 0x060 +#define WB_N16_INIT_PHASE_X_C12 0x064 +#define WB_N16_INIT_PHASE_Y_C03 0x068 +#define WB_N16_INIT_PHASE_Y_C12 0x06C +#define WB_OUT_SIZE 0x074 +#define WB_ALPHA_X_VALUE 0x078 +#define WB_DANGER_LUT 0x084 +#define WB_SAFE_LUT 0x088 +#define WB_QOS_CTRL 0x090 +#define WB_CREQ_LUT_0 0x098 +#define WB_CREQ_LUT_1 0x09C +#define WB_UBWC_STATIC_CTRL 0x144 +#define WB_MUX 0x150 +#define WB_CROP_CTRL 0x154 +#define WB_CROP_OFFSET 0x158 +#define WB_CSC_BASE 0x260 +#define WB_DST_ADDR_SW_STATUS 0x2B0 +#define WB_CDP_CNTL 0x2B4 +#define WB_OUT_IMAGE_SIZE 0x2C0 +#define WB_OUT_XY 0x2C4 + +/* WB_QOS_CTRL */ +#define WB_QOS_CTRL_DANGER_SAFE_EN BIT(0) + +static const struct dpu_wb_cfg *_wb_offset(enum dpu_wb wb, + const struct dpu_mdss_cfg *m, void __iomem *addr, + struct dpu_hw_blk_reg_map *b) +{ + int i; + + for (i = 0; i < m->wb_count; i++) { + if (wb == m->wb[i].id) { + b->base_off = addr; + b->blk_off = m->wb[i].base; + b->length = m->wb[i].len; + b->hwversion = m->hwversion; + return &m->wb[i]; + } + } + return ERR_PTR(-EINVAL); +} + +static void dpu_hw_wb_setup_outaddress(struct dpu_hw_wb *ctx, + struct dpu_hw_wb_cfg *data) +{ + struct dpu_hw_blk_reg_map *c = &ctx->hw; + + DPU_REG_WRITE(c, WB_DST0_ADDR, data->dest.plane_addr[0]); + DPU_REG_WRITE(c, WB_DST1_ADDR, data->dest.plane_addr[1]); + DPU_REG_WRITE(c, WB_DST2_ADDR, data->dest.plane_addr[2]); + DPU_REG_WRITE(c, WB_DST3_ADDR, data->dest.plane_addr[3]); +} + +static void dpu_hw_wb_setup_format(struct dpu_hw_wb *ctx, + struct dpu_hw_wb_cfg *data) +{ + struct dpu_hw_blk_reg_map *c = &ctx->hw; + const struct dpu_format *fmt = data->dest.format; + u32 dst_format, pattern, ystride0, ystride1, outsize, chroma_samp; + u32 write_config = 0; + u32 opmode = 0; + u32 dst_addr_sw = 0; + + chroma_samp = fmt->chroma_sample; + + dst_format = (chroma_samp << 23) | + (fmt->fetch_planes << 19) | + (fmt->bits[C3_ALPHA] << 6) | + (fmt->bits[C2_R_Cr] << 4) | + (fmt->bits[C1_B_Cb] << 2) | + (fmt->bits[C0_G_Y] << 0); + + if (fmt->bits[C3_ALPHA] || fmt->alpha_enable) { + dst_format |= BIT(8); /* DSTC3_EN */ + if (!fmt->alpha_enable || + !(ctx->caps->features & BIT(DPU_WB_PIPE_ALPHA))) + dst_format |= BIT(14); /* DST_ALPHA_X */ + } + + pattern = (fmt->element[3] << 24) | + (fmt->element[2] << 16) | + (fmt->element[1] << 8) | + (fmt->element[0] << 0); + + dst_format |= (fmt->unpack_align_msb << 18) | + (fmt->unpack_tight << 17) | + ((fmt->unpack_count - 1) << 12) | + ((fmt->bpp - 1) << 9); + + ystride0 = data->dest.plane_pitch[0] | + (data->dest.plane_pitch[1] << 16); + ystride1 = data->dest.plane_pitch[2] | + (data->dest.plane_pitch[3] << 16); + + if (drm_rect_height(&data->roi) && drm_rect_width(&data->roi)) + outsize = (drm_rect_height(&data->roi) << 16) | 
drm_rect_width(&data->roi); + else + outsize = (data->dest.height << 16) | data->dest.width; + + DPU_REG_WRITE(c, WB_ALPHA_X_VALUE, 0xFF); + DPU_REG_WRITE(c, WB_DST_FORMAT, dst_format); + DPU_REG_WRITE(c, WB_DST_OP_MODE, opmode); + DPU_REG_WRITE(c, WB_DST_PACK_PATTERN, pattern); + DPU_REG_WRITE(c, WB_DST_YSTRIDE0, ystride0); + DPU_REG_WRITE(c, WB_DST_YSTRIDE1, ystride1); + DPU_REG_WRITE(c, WB_OUT_SIZE, outsize); + DPU_REG_WRITE(c, WB_DST_WRITE_CONFIG, write_config); + DPU_REG_WRITE(c, WB_DST_ADDR_SW_STATUS, dst_addr_sw); +} + +static void dpu_hw_wb_roi(struct dpu_hw_wb *ctx, struct dpu_hw_wb_cfg *wb) +{ + struct dpu_hw_blk_reg_map *c = &ctx->hw; + u32 image_size, out_size, out_xy; + + image_size = (wb->dest.height << 16) | wb->dest.width; + out_xy = 0; + out_size = (drm_rect_height(&wb->roi) << 16) | drm_rect_width(&wb->roi); + + DPU_REG_WRITE(c, WB_OUT_IMAGE_SIZE, image_size); + DPU_REG_WRITE(c, WB_OUT_XY, out_xy); + DPU_REG_WRITE(c, WB_OUT_SIZE, out_size); +} + +static void dpu_hw_wb_setup_qos_lut(struct dpu_hw_wb *ctx, + struct dpu_hw_wb_qos_cfg *cfg) +{ + struct dpu_hw_blk_reg_map *c = &ctx->hw; + u32 qos_ctrl = 0; + + if (!ctx || !cfg) + return; + + DPU_REG_WRITE(c, WB_DANGER_LUT, cfg->danger_lut); + DPU_REG_WRITE(c, WB_SAFE_LUT, cfg->safe_lut); + + if (ctx->caps && test_bit(DPU_WB_QOS_8LVL, &ctx->caps->features)) { + DPU_REG_WRITE(c, WB_CREQ_LUT_0, cfg->creq_lut); + DPU_REG_WRITE(c, WB_CREQ_LUT_1, cfg->creq_lut >> 32); + } + + if (cfg->danger_safe_en) + qos_ctrl |= WB_QOS_CTRL_DANGER_SAFE_EN; + + DPU_REG_WRITE(c, WB_QOS_CTRL, qos_ctrl); +} + +static void dpu_hw_wb_setup_cdp(struct dpu_hw_wb *ctx, + struct dpu_hw_wb_cdp_cfg *cfg) +{ + struct dpu_hw_blk_reg_map *c; + u32 cdp_cntl = 0; + + if (!ctx || !cfg) + return; + + c = &ctx->hw; + + if (cfg->enable) + cdp_cntl |= BIT(0); + if (cfg->ubwc_meta_enable) + cdp_cntl |= BIT(1); + if (cfg->preload_ahead == DPU_WB_CDP_PRELOAD_AHEAD_64) + cdp_cntl |= BIT(3); + + DPU_REG_WRITE(c, WB_CDP_CNTL, cdp_cntl); +} + +static void dpu_hw_wb_bind_pingpong_blk( + struct dpu_hw_wb *ctx, + bool enable, const enum dpu_pingpong pp) +{ + struct dpu_hw_blk_reg_map *c; + int mux_cfg = 0xF; + + if (!ctx) + return; + + c = &ctx->hw; + if (enable) + mux_cfg = (pp - PINGPONG_0) & 0x7; + + DPU_REG_WRITE(c, WB_MUX, mux_cfg); +} + +static void _setup_wb_ops(struct dpu_hw_wb_ops *ops, + unsigned long features) +{ + ops->setup_outaddress = dpu_hw_wb_setup_outaddress; + ops->setup_outformat = dpu_hw_wb_setup_format; + + if (test_bit(DPU_WB_XY_ROI_OFFSET, &features)) + ops->setup_roi = dpu_hw_wb_roi; + + if (test_bit(DPU_WB_QOS, &features)) + ops->setup_qos_lut = dpu_hw_wb_setup_qos_lut; + + if (test_bit(DPU_WB_CDP, &features)) + ops->setup_cdp = dpu_hw_wb_setup_cdp; + + if (test_bit(DPU_WB_INPUT_CTRL, &features)) + ops->bind_pingpong_blk = dpu_hw_wb_bind_pingpong_blk; +} + +struct dpu_hw_wb *dpu_hw_wb_init(enum dpu_wb idx, + void __iomem *addr, const struct dpu_mdss_cfg *m) +{ + struct dpu_hw_wb *c; + const struct dpu_wb_cfg *cfg; + + if (!addr || !m) + return ERR_PTR(-EINVAL); + + c = kzalloc(sizeof(*c), GFP_KERNEL); + if (!c) + return ERR_PTR(-ENOMEM); + + cfg = _wb_offset(idx, m, addr, &c->hw); + if (IS_ERR(cfg)) { + WARN(1, "Unable to find wb idx=%d\n", idx); + kfree(c); + return ERR_PTR(-EINVAL); + } + + /* Assign ops */ + c->mdp = &m->mdp[0]; + c->idx = idx; + c->caps = cfg; + _setup_wb_ops(&c->ops, c->caps->features); + + return c; +} + +void dpu_hw_wb_destroy(struct dpu_hw_wb *hw_wb) +{ + kfree(hw_wb); +} diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.h 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.h new file mode 100644 index 0000000..39d745f --- /dev/null +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.h @@ -0,0 +1,145 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved + */ + +#ifndef _DPU_HW_WB_H +#define _DPU_HW_WB_H + +#include "dpu_hw_catalog.h" +#include "dpu_hw_mdss.h" +#include "dpu_hw_top.h" +#include "dpu_hw_util.h" +#include "dpu_hw_pingpong.h" + +struct dpu_hw_wb; + +struct dpu_hw_wb_cfg { + struct dpu_hw_fmt_layout dest; + enum dpu_intf_mode intf_mode; + struct drm_rect roi; + struct drm_rect crop; +}; + +/** + * enum CDP preload ahead address size + */ +enum { + DPU_WB_CDP_PRELOAD_AHEAD_32, + DPU_WB_CDP_PRELOAD_AHEAD_64 +}; + +/** + * struct dpu_hw_wb_cdp_cfg : CDP configuration + * @enable: true to enable CDP + * @ubwc_meta_enable: true to enable ubwc metadata preload + * @tile_amortize_enable: true to enable amortization control for tile format + * @preload_ahead: number of request to preload ahead + * SDE_WB_CDP_PRELOAD_AHEAD_32, + * SDE_WB_CDP_PRELOAD_AHEAD_64 + */ +struct dpu_hw_wb_cdp_cfg { + bool enable; + bool ubwc_meta_enable; + bool tile_amortize_enable; + u32 preload_ahead; +}; + +/** + * struct dpu_hw_wb_qos_cfg : Writeback pipe QoS configuration + * @danger_lut: LUT for generate danger level based on fill level + * @safe_lut: LUT for generate safe level based on fill level + * @creq_lut: LUT for generate creq level based on fill level + * @danger_safe_en: enable danger safe generation + */ +struct dpu_hw_wb_qos_cfg { + u32 danger_lut; + u32 safe_lut; + u64 creq_lut; + bool danger_safe_en; +}; + +/** + * + * struct dpu_hw_wb_ops : Interface to the wb hw driver functions + * Assumption is these functions will be called after clocks are enabled + * @setup_outaddress: setup output address from the writeback job + * @setup_outformat: setup output format of writeback block from writeback job + * @setup_qos_lut: setup qos LUT for writeback block based on input + * @setup_cdp: setup chroma down prefetch block for writeback block + * @bind_pingpong_blk: enable/disable the connection with ping-pong block + */ +struct dpu_hw_wb_ops { + void (*setup_outaddress)(struct dpu_hw_wb *ctx, + struct dpu_hw_wb_cfg *wb); + + void (*setup_outformat)(struct dpu_hw_wb *ctx, + struct dpu_hw_wb_cfg *wb); + + void (*setup_roi)(struct dpu_hw_wb *ctx, + struct dpu_hw_wb_cfg *wb); + + void (*setup_qos_lut)(struct dpu_hw_wb *ctx, + struct dpu_hw_wb_qos_cfg *cfg); + + void (*setup_cdp)(struct dpu_hw_wb *ctx, + struct dpu_hw_wb_cdp_cfg *cfg); + + void (*bind_pingpong_blk)(struct dpu_hw_wb *ctx, + bool enable, const enum dpu_pingpong pp); +}; + +/** + * struct dpu_hw_wb : WB driver object + * @base: hardware block base structure + * @hw: block hardware details + * @mdp: pointer to associated mdp portion of the catalog + * @idx: hardware index number within type + * @wb_hw_caps: hardware capabilities + * @ops: function pointers + * @hw_mdp: MDP top level hardware block + */ +struct dpu_hw_wb { + struct dpu_hw_blk base; + struct dpu_hw_blk_reg_map hw; + const struct dpu_mdp_cfg *mdp; + + /* wb path */ + int idx; + const struct dpu_wb_cfg *caps; + + /* ops */ + struct dpu_hw_wb_ops ops; + + struct dpu_hw_mdp *hw_mdp; +}; + +/** + * dpu_hw_wb - convert base object dpu_hw_base to container + * @hw: Pointer to base hardware block + * return: Pointer to hardware block container + */ +static inline struct dpu_hw_wb *to_dpu_hw_wb(struct dpu_hw_blk *hw) +{ + return container_of(hw, 
struct dpu_hw_wb, base); +} + +/** + * dpu_hw_wb_init(): Initializes and return writeback hw driver object. + * @idx: wb_path index for which driver object is required + * @addr: mapped register io address of MDP + * @m : pointer to mdss catalog data + */ +struct dpu_hw_wb *dpu_hw_wb_init(enum dpu_wb idx, + void __iomem *addr, + const struct dpu_mdss_cfg *m); + +/** + * dpu_hw_wb_destroy(): Destroy writeback hw driver object. + * @hw_wb: Pointer to writeback hw driver object + */ +void dpu_hw_wb_destroy(struct dpu_hw_wb *hw_wb); + +#endif /*_DPU_HW_WB_H */ + + From patchwork Fri Feb 4 21:17:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Abhinav Kumar X-Patchwork-Id: 539916 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6FB70C433F5 for ; Fri, 4 Feb 2022 21:19:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234884AbiBDVSz (ORCPT ); Fri, 4 Feb 2022 16:18:55 -0500 Received: from alexa-out-sd-02.qualcomm.com ([199.106.114.39]:42023 "EHLO alexa-out-sd-02.qualcomm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347030AbiBDVRt (ORCPT ); Fri, 4 Feb 2022 16:17:49 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1644009469; x=1675545469; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=OMGtz4WT+eqttSgXixTVcRCoWxojnt9vZ2LHA8zh9DE=; b=Lfgq1QYsHCvgP6nCkP7ZFgCIScbiIMWY0tIIxMntMA4nsIDyzCqDq4NU gjTk4Bcot94xL0kY4zP5lKPm1eCckZgXh1HTtVu+iz7A2lZjMcfmXiJPO ij0zy4u7b8mZ8hSzBkdkqFttYzdBIxv/vf4cywwyyEfUruPN111aS5Hcn Q=; Received: from unknown (HELO ironmsg05-sd.qualcomm.com) ([10.53.140.145]) by alexa-out-sd-02.qualcomm.com with ESMTP; 04 Feb 2022 13:17:49 -0800 X-QCInternal: smtphost Received: from nasanex01c.na.qualcomm.com ([10.47.97.222]) by ironmsg05-sd.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Feb 2022 13:17:49 -0800 Received: from nalasex01a.na.qualcomm.com (10.47.209.196) by nasanex01c.na.qualcomm.com (10.47.97.222) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:49 -0800 Received: from abhinavk-linux.qualcomm.com (10.80.80.8) by nalasex01a.na.qualcomm.com (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:48 -0800 From: Abhinav Kumar To: CC: Abhinav Kumar , , , , , , , , , , , Subject: [PATCH 03/12] drm/msm/dpu: add writeback blocks to DPU RM Date: Fri, 4 Feb 2022 13:17:16 -0800 Message-ID: <1644009445-17320-4-git-send-email-quic_abhinavk@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> References: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01a.na.qualcomm.com (10.47.209.196) Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add writeback blocks to DPU resource manager so that writeback encoders can request for writeback hardware blocks through RM and their usage can be tracked. 
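For context, a minimal sketch of how a writeback encoder could retrieve the WB block that the RM reserved for it, using the lookup path added in this series (illustrative only, not part of the patch; the helper name and caller-supplied pointers are placeholders):

#include "dpu_rm.h"
#include "dpu_hw_wb.h"

/* Illustrative helper: fetch the writeback block assigned to an encoder. */
static struct dpu_hw_wb *example_get_reserved_wb(struct dpu_rm *rm,
		struct dpu_global_state *global_state, uint32_t enc_id)
{
	struct dpu_hw_blk *hw_blks[WB_MAX - WB_0];
	int num_blk;

	num_blk = dpu_rm_get_assigned_resources(rm, global_state, enc_id,
			DPU_HW_BLK_WB, hw_blks, ARRAY_SIZE(hw_blks));

	/* a writeback encoder uses at most one WB block */
	return num_blk > 0 ? to_dpu_hw_wb(hw_blks[0]) : NULL;
}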
Signed-off-by: Abhinav Kumar --- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h | 3 ++ drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h | 2 + drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c | 71 +++++++++++++++++++++++++++++ drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h | 2 + 4 files changed, 78 insertions(+) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h index e241914..cc10436 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. * Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. * Copyright (C) 2013 Red Hat * Author: Rob Clark @@ -21,9 +22,11 @@ /** * Encoder functions and data types * @intfs: Interfaces this encoder is using, INTF_MODE_NONE if unused + * @wbs: Writeback blocks this encoder is using */ struct dpu_encoder_hw_resources { enum dpu_intf_mode intfs[INTF_MAX]; + enum dpu_intf_mode wbs[WB_MAX]; }; /** diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h index 2d385b4..1e00804 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. * Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. * Copyright (C) 2013 Red Hat * Author: Rob Clark @@ -146,6 +147,7 @@ struct dpu_global_state { uint32_t ctl_to_enc_id[CTL_MAX - CTL_0]; uint32_t intf_to_enc_id[INTF_MAX - INTF_0]; uint32_t dspp_to_enc_id[DSPP_MAX - DSPP_0]; + uint32_t wb_to_enc_id[WB_MAX - WB_0]; }; struct dpu_global_state diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c index f9c83d6..edd0b7a 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only /* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved. 
*/ @@ -9,6 +10,7 @@ #include "dpu_hw_ctl.h" #include "dpu_hw_pingpong.h" #include "dpu_hw_intf.h" +#include "dpu_hw_wb.h" #include "dpu_hw_dspp.h" #include "dpu_hw_merge3d.h" #include "dpu_encoder.h" @@ -75,6 +77,14 @@ int dpu_rm_destroy(struct dpu_rm *rm) dpu_hw_intf_destroy(hw); } } + for (i = 0; i < ARRAY_SIZE(rm->wb_blks); i++) { + struct dpu_hw_wb *hw; + + if (rm->wb_blks[i]) { + hw = to_dpu_hw_wb(rm->wb_blks[i]); + dpu_hw_wb_destroy(hw); + } + } return 0; } @@ -187,6 +197,24 @@ int dpu_rm_init(struct dpu_rm *rm, rm->intf_blks[intf->id - INTF_0] = &hw->base; } + for (i = 0; i < cat->wb_count; i++) { + struct dpu_hw_wb *hw; + const struct dpu_wb_cfg *wb = &cat->wb[i]; + + if (wb->id < WB_0 || wb->id >= WB_MAX) { + DPU_ERROR("skip intf %d with invalid id\n", wb->id); + continue; + } + + hw = dpu_hw_wb_init(wb->id, mmio, cat); + if (IS_ERR_OR_NULL(hw)) { + rc = PTR_ERR(hw); + DPU_ERROR("failed wb object creation: err %d\n", rc); + goto fail; + } + rm->wb_blks[wb->id - WB_0] = &hw->base; + } + for (i = 0; i < cat->ctl_count; i++) { struct dpu_hw_ctl *hw; const struct dpu_ctl_cfg *ctl = &cat->ctl[i]; @@ -479,6 +507,33 @@ static int _dpu_rm_reserve_intf( return 0; } +static int _dpu_rm_reserve_wb( + struct dpu_rm *rm, + struct dpu_global_state *global_state, + uint32_t enc_id, + uint32_t id) +{ + int idx = id - WB_0; + + if (idx < 0 || idx >= ARRAY_SIZE(rm->wb_blks)) { + DPU_ERROR("invalid intf id: %d", id); + return -EINVAL; + } + + if (!rm->wb_blks[idx]) { + DPU_ERROR("couldn't find wb id %d\n", id); + return -EINVAL; + } + + if (reserved_by_other(global_state->wb_to_enc_id, idx, enc_id)) { + DPU_ERROR("intf id %d already reserved\n", id); + return -ENAVAIL; + } + + global_state->wb_to_enc_id[idx] = enc_id; + return 0; +} + static int _dpu_rm_reserve_intf_related_hw( struct dpu_rm *rm, struct dpu_global_state *global_state, @@ -497,6 +552,15 @@ static int _dpu_rm_reserve_intf_related_hw( return ret; } + for (i = 0; i < ARRAY_SIZE(hw_res->wbs); i++) { + if (hw_res->wbs[i] == INTF_MODE_NONE) + continue; + id = i + WB_0; + ret = _dpu_rm_reserve_wb(rm, global_state, enc_id, id); + if (ret) + return ret; + } + return ret; } @@ -567,6 +631,8 @@ void dpu_rm_release(struct dpu_global_state *global_state, ARRAY_SIZE(global_state->ctl_to_enc_id), enc->base.id); _dpu_rm_clear_mapping(global_state->intf_to_enc_id, ARRAY_SIZE(global_state->intf_to_enc_id), enc->base.id); + _dpu_rm_clear_mapping(global_state->wb_to_enc_id, + ARRAY_SIZE(global_state->wb_to_enc_id), enc->base.id); } int dpu_rm_reserve( @@ -635,6 +701,11 @@ int dpu_rm_get_assigned_resources(struct dpu_rm *rm, hw_to_enc_id = global_state->intf_to_enc_id; max_blks = ARRAY_SIZE(rm->intf_blks); break; + case DPU_HW_BLK_WB: + hw_blks = rm->wb_blks; + hw_to_enc_id = global_state->wb_to_enc_id; + max_blks = ARRAY_SIZE(rm->wb_blks); + break; case DPU_HW_BLK_DSPP: hw_blks = rm->dspp_blks; hw_to_enc_id = global_state->dspp_to_enc_id; diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h index 1f12c8d..a021409 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved. 
*/ @@ -30,6 +31,7 @@ struct dpu_rm { struct dpu_hw_blk *intf_blks[INTF_MAX - INTF_0]; struct dpu_hw_blk *dspp_blks[DSPP_MAX - DSPP_0]; struct dpu_hw_blk *merge_3d_blks[MERGE_3D_MAX - MERGE_3D_0]; + struct dpu_hw_blk *wb_blks[WB_MAX - WB_0]; uint32_t lm_max_width; }; From patchwork Fri Feb 4 21:17:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Abhinav Kumar X-Patchwork-Id: 540084 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2CECDC433EF for ; Fri, 4 Feb 2022 21:20:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235523AbiBDVUN (ORCPT ); Fri, 4 Feb 2022 16:20:13 -0500 Received: from alexa-out-sd-01.qualcomm.com ([199.106.114.38]:34876 "EHLO alexa-out-sd-01.qualcomm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242154AbiBDVTG (ORCPT ); Fri, 4 Feb 2022 16:19:06 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1644009535; x=1675545535; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=R4vc8veVE5vGtYvuP7FikCnvo+UspBg0N538lwNDY+A=; b=s+ok6uALHSYsYw7qsbWqO9Dv8RvVFKmcZxZMBP0fCh0Z9d5ee8pVyco7 7goDUSTjKemTzJc4glxLhse4fyi+y8EfoVjMOPFWAYEUHMgAY3gI9pMqg FOe43AWmSMF4UQbbzV8XmS9Fc0CJgexmjMolMUex/7Y+QDx3ESrTJjiQD A=; Received: from unknown (HELO ironmsg03-sd.qualcomm.com) ([10.53.140.143]) by alexa-out-sd-01.qualcomm.com with ESMTP; 04 Feb 2022 13:17:55 -0800 X-QCInternal: smtphost Received: from nasanex01c.na.qualcomm.com ([10.47.97.222]) by ironmsg03-sd.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Feb 2022 13:17:54 -0800 Received: from nalasex01a.na.qualcomm.com (10.47.209.196) by nasanex01c.na.qualcomm.com (10.47.97.222) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:51 -0800 Received: from abhinavk-linux.qualcomm.com (10.80.80.8) by nalasex01a.na.qualcomm.com (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:50 -0800 From: Abhinav Kumar To: CC: Abhinav Kumar , , , , , , , , , , , Subject: [PATCH 04/12] drm/msm/dpu: add changes to support writeback in hw_ctl Date: Fri, 4 Feb 2022 13:17:17 -0800 Message-ID: <1644009445-17320-5-git-send-email-quic_abhinavk@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> References: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01a.na.qualcomm.com (10.47.209.196) Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add changes to support writeback module in the dpu_hw_ctl interface. In addition inroduce a reset_intf_cfg op to reset the interface bits for the currently active interfaces in the ctl path. 
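As a rough illustration (not part of this patch), a writeback encoder would then drive the CTL along these lines, using the new is_wb argument and the WB flush helper; 'ctl' and 'hw_wb' stand for blocks the encoder has already looked up, and the CMD mode select for writeback is an assumption:

#include "dpu_hw_ctl.h"
#include "dpu_hw_wb.h"

/* Illustrative sequence only, not code from this series. */
static void example_wb_ctl_commit(struct dpu_hw_ctl *ctl,
				  struct dpu_hw_wb *hw_wb)
{
	struct dpu_hw_intf_cfg intf_cfg = { 0 };

	intf_cfg.wb = hw_wb->idx;
	intf_cfg.intf_mode_sel = DPU_CTL_MODE_SEL_CMD;

	/* program CTL_WB_ACTIVE through the is_wb path */
	if (ctl->ops.setup_intf_cfg)
		ctl->ops.setup_intf_cfg(ctl, &intf_cfg, true);

	/* mark the WB block for flush, then kick the CTL */
	if (ctl->ops.update_pending_flush_wb)
		ctl->ops.update_pending_flush_wb(ctl, hw_wb->idx);
	ctl->ops.trigger_flush(ctl);
}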
Signed-off-by: Abhinav Kumar --- .../gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c | 3 +- .../gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c | 6 +- drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c | 65 ++++++++++++++++++++-- drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h | 27 ++++++++- 4 files changed, 91 insertions(+), 10 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c index 34a6940..4cb72fa 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only /* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. * Copyright (c) 2015-2018, 2020-2021 The Linux Foundation. All rights reserved. */ @@ -70,7 +71,7 @@ static void _dpu_encoder_phys_cmd_update_intf_cfg( intf_cfg.intf_mode_sel = DPU_CTL_MODE_SEL_CMD; intf_cfg.stream_sel = cmd_enc->stream_sel; intf_cfg.mode_3d = dpu_encoder_helper_get_3d_blend_mode(phys_enc); - ctl->ops.setup_intf_cfg(ctl, &intf_cfg); + ctl->ops.setup_intf_cfg(ctl, &intf_cfg, false); } static void dpu_encoder_phys_cmd_pp_tx_done_irq(void *arg, int irq_idx) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c index ddd9d89..950fcd6 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c @@ -1,5 +1,7 @@ // SPDX-License-Identifier: GPL-2.0-only -/* Copyright (c) 2015-2018, 2020-2021 The Linux Foundation. All rights reserved. +/* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. + * Copyright (c) 2015-2018, 2020-2021 The Linux Foundation. All rights reserved. */ #define pr_fmt(fmt) "[drm:%s:%d] " fmt, __func__, __LINE__ @@ -290,7 +292,7 @@ static void dpu_encoder_phys_vid_setup_timing_engine( spin_lock_irqsave(phys_enc->enc_spinlock, lock_flags); phys_enc->hw_intf->ops.setup_timing_gen(phys_enc->hw_intf, &timing_params, fmt); - phys_enc->hw_ctl->ops.setup_intf_cfg(phys_enc->hw_ctl, &intf_cfg); + phys_enc->hw_ctl->ops.setup_intf_cfg(phys_enc->hw_ctl, &intf_cfg, false); /* setup which pp blk will connect to this intf */ if (phys_enc->hw_intf->ops.bind_pingpong_blk) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c index 02da9ec..a2069af 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only -/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. +/* Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. + * Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. 
*/ #include @@ -23,8 +24,10 @@ #define CTL_SW_RESET 0x030 #define CTL_LAYER_EXTN_OFFSET 0x40 #define CTL_MERGE_3D_ACTIVE 0x0E4 +#define CTL_WB_ACTIVE 0x0EC #define CTL_INTF_ACTIVE 0x0F4 #define CTL_MERGE_3D_FLUSH 0x100 +#define CTL_WB_FLUSH 0x108 #define CTL_INTF_FLUSH 0x110 #define CTL_INTF_MASTER 0x134 #define CTL_FETCH_PIPE_ACTIVE 0x0FC @@ -35,6 +38,7 @@ #define DPU_REG_RESET_TIMEOUT_US 2000 #define MERGE_3D_IDX 23 #define INTF_IDX 31 +#define WB_IDX 16 #define CTL_INVALID_BIT 0xffff #define CTL_DEFAULT_GROUP_ID 0xf @@ -128,6 +132,9 @@ static inline void dpu_hw_ctl_trigger_flush_v1(struct dpu_hw_ctl *ctx) if (ctx->pending_flush_mask & BIT(INTF_IDX)) DPU_REG_WRITE(&ctx->hw, CTL_INTF_FLUSH, ctx->pending_intf_flush_mask); + if (ctx->pending_flush_mask & BIT(WB_IDX)) + DPU_REG_WRITE(&ctx->hw, CTL_WB_FLUSH, + ctx->pending_wb_flush_mask); DPU_REG_WRITE(&ctx->hw, CTL_FLUSH, ctx->pending_flush_mask); } @@ -248,6 +255,13 @@ static void dpu_hw_ctl_update_pending_flush_intf(struct dpu_hw_ctl *ctx, } } +static void dpu_hw_ctl_update_pending_flush_wb_v1(struct dpu_hw_ctl *ctx, + enum dpu_wb wb) +{ + ctx->pending_wb_flush_mask |= BIT(wb - WB_0); + ctx->pending_flush_mask |= BIT(WB_IDX); +} + static void dpu_hw_ctl_update_pending_flush_intf_v1(struct dpu_hw_ctl *ctx, enum dpu_intf intf) { @@ -493,10 +507,11 @@ static void dpu_hw_ctl_setup_blendstage(struct dpu_hw_ctl *ctx, static void dpu_hw_ctl_intf_cfg_v1(struct dpu_hw_ctl *ctx, - struct dpu_hw_intf_cfg *cfg) + struct dpu_hw_intf_cfg *cfg, bool is_wb) { struct dpu_hw_blk_reg_map *c = &ctx->hw; u32 intf_active = 0; + u32 wb_active = 0; u32 mode_sel = 0; /* CTL_TOP[31:28] carries group_id to collate CTL paths @@ -509,18 +524,25 @@ static void dpu_hw_ctl_intf_cfg_v1(struct dpu_hw_ctl *ctx, if (cfg->intf_mode_sel == DPU_CTL_MODE_SEL_CMD) mode_sel |= BIT(17); - intf_active = DPU_REG_READ(c, CTL_INTF_ACTIVE); - intf_active |= BIT(cfg->intf - INTF_0); + if (!is_wb) { + intf_active = DPU_REG_READ(c, CTL_INTF_ACTIVE); + intf_active |= BIT(cfg->intf - INTF_0); + } else { + wb_active = DPU_REG_READ(c, CTL_WB_ACTIVE); + wb_active = BIT(cfg->wb - WB_0); + } DPU_REG_WRITE(c, CTL_TOP, mode_sel); DPU_REG_WRITE(c, CTL_INTF_ACTIVE, intf_active); + DPU_REG_WRITE(c, CTL_WB_ACTIVE, wb_active); + if (cfg->merge_3d) DPU_REG_WRITE(c, CTL_MERGE_3D_ACTIVE, BIT(cfg->merge_3d - MERGE_3D_0)); } static void dpu_hw_ctl_intf_cfg(struct dpu_hw_ctl *ctx, - struct dpu_hw_intf_cfg *cfg) + struct dpu_hw_intf_cfg *cfg, bool is_wb) { struct dpu_hw_blk_reg_map *c = &ctx->hw; u32 intf_cfg = 0; @@ -532,6 +554,9 @@ static void dpu_hw_ctl_intf_cfg(struct dpu_hw_ctl *ctx, intf_cfg |= (cfg->mode_3d - 0x1) << 20; } + if (is_wb) + intf_cfg |= (cfg->wb & 0x3) + 2; + switch (cfg->intf_mode_sel) { case DPU_CTL_MODE_SEL_VID: intf_cfg &= ~BIT(17); @@ -549,6 +574,34 @@ static void dpu_hw_ctl_intf_cfg(struct dpu_hw_ctl *ctx, DPU_REG_WRITE(c, CTL_TOP, intf_cfg); } +static void dpu_hw_ctl_reset_intf_cfg_v1(struct dpu_hw_ctl *ctx, + struct dpu_hw_intf_cfg *cfg, bool is_wb) +{ + struct dpu_hw_blk_reg_map *c = &ctx->hw; + u32 intf_active = 0; + u32 wb_active = 0; + u32 merge3d_active = 0; + + if (cfg->merge_3d) { + merge3d_active = DPU_REG_READ(c, CTL_MERGE_3D_ACTIVE); + DPU_REG_WRITE(c, CTL_MERGE_3D_ACTIVE, + BIT(cfg->merge_3d - MERGE_3D_0)); + } + + dpu_hw_ctl_clear_all_blendstages(ctx); + + if (!is_wb) { + intf_active = DPU_REG_READ(c, CTL_INTF_ACTIVE); + intf_active &= ~BIT(cfg->intf - INTF_0); + DPU_REG_WRITE(c, CTL_INTF_ACTIVE, intf_active); + } else { + wb_active = DPU_REG_READ(c, CTL_WB_ACTIVE); + 
wb_active &= ~BIT(cfg->wb - WB_0); + DPU_REG_WRITE(c, CTL_WB_ACTIVE, wb_active); + } +} + + static void dpu_hw_ctl_set_fetch_pipe_active(struct dpu_hw_ctl *ctx, unsigned long *fetch_active) { @@ -572,10 +625,12 @@ static void _setup_ctl_ops(struct dpu_hw_ctl_ops *ops, if (cap & BIT(DPU_CTL_ACTIVE_CFG)) { ops->trigger_flush = dpu_hw_ctl_trigger_flush_v1; ops->setup_intf_cfg = dpu_hw_ctl_intf_cfg_v1; + ops->reset_intf_cfg = dpu_hw_ctl_reset_intf_cfg_v1; ops->update_pending_flush_intf = dpu_hw_ctl_update_pending_flush_intf_v1; ops->update_pending_flush_merge_3d = dpu_hw_ctl_update_pending_flush_merge_3d_v1; + ops->update_pending_flush_wb = dpu_hw_ctl_update_pending_flush_wb_v1; } else { ops->trigger_flush = dpu_hw_ctl_trigger_flush; ops->setup_intf_cfg = dpu_hw_ctl_intf_cfg; diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h index 806c171..fb4baca 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0-only */ -/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. +/* Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. + * Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. */ #ifndef _DPU_HW_CTL_H @@ -43,6 +44,7 @@ struct dpu_hw_stage_cfg { */ struct dpu_hw_intf_cfg { enum dpu_intf intf; + enum dpu_wb wb; enum dpu_3d_blend_mode mode_3d; enum dpu_merge_3d merge_3d; enum dpu_ctl_mode_sel intf_mode_sel; @@ -93,6 +95,15 @@ struct dpu_hw_ctl_ops { u32 flushbits); /** + * OR in the given flushbits to the cached pending_(wb_)flush_mask + * No effect on hardware + * @ctx : ctl path ctx pointer + * @blk : writeback block index + */ + void (*update_pending_flush_wb)(struct dpu_hw_ctl *ctx, + enum dpu_wb blk); + + /** * OR in the given flushbits to the cached pending_(intf_)flush_mask * No effect on hardware * @ctx : ctl path ctx pointer @@ -127,9 +138,19 @@ struct dpu_hw_ctl_ops { * Setup ctl_path interface config * @ctx * @cfg : interface config structure pointer + * @is_wb : to indicate wb mode for programming the ctl path */ void (*setup_intf_cfg)(struct dpu_hw_ctl *ctx, - struct dpu_hw_intf_cfg *cfg); + struct dpu_hw_intf_cfg *cfg, bool is_wb); + + /** + * reset ctl_path interface config + * @ctx + * @cfg : interface config structure pointer + * @is_wb : to indicate wb mode for programming the ctl path + */ + void (*reset_intf_cfg)(struct dpu_hw_ctl *ctx, + struct dpu_hw_intf_cfg *cfg, bool is_wb); int (*reset)(struct dpu_hw_ctl *c); @@ -182,6 +203,7 @@ struct dpu_hw_ctl_ops { * @mixer_hw_caps: mixer hardware capabilities * @pending_flush_mask: storage for pending ctl_flush managed via ops * @pending_intf_flush_mask: pending INTF flush + * @pending_wb_flush_mask: pending WB flush * @ops: operation list */ struct dpu_hw_ctl { @@ -195,6 +217,7 @@ struct dpu_hw_ctl { const struct dpu_lm_cfg *mixer_hw_caps; u32 pending_flush_mask; u32 pending_intf_flush_mask; + u32 pending_wb_flush_mask; u32 pending_merge_3d_flush_mask; /* ops */ From patchwork Fri Feb 4 21:17:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Abhinav Kumar X-Patchwork-Id: 539914 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CBD99C43219 for ; Fri, 4 Feb 2022 21:19:06 +0000 (UTC) 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242739AbiBDVTD (ORCPT ); Fri, 4 Feb 2022 16:19:03 -0500 Received: from alexa-out-sd-01.qualcomm.com ([199.106.114.38]:57664 "EHLO alexa-out-sd-01.qualcomm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243763AbiBDVRz (ORCPT ); Fri, 4 Feb 2022 16:17:55 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1644009475; x=1675545475; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=8VqePdW7QQ+UL43Ie3bgYrq4PCND3FJQmLZE8lTXUtM=; b=nQlf8Fn/8tdrRIx0Wa4tWCXAAJKymmTvoPiTT0hBL5c+JKhTpQclfzfR j1lKSMtdOOMkrELfMEpMv5pM5rykPoKaEfmnn/aWyWHkc41gUako3ONBZ tKZgKCHKDWruDwMn4oxuIMp7b/9ySKicOvZ/rRaTHWP1lmubkXmaWkFv4 I=; Received: from unknown (HELO ironmsg03-sd.qualcomm.com) ([10.53.140.143]) by alexa-out-sd-01.qualcomm.com with ESMTP; 04 Feb 2022 13:17:55 -0800 X-QCInternal: smtphost Received: from nasanex01c.na.qualcomm.com ([10.47.97.222]) by ironmsg03-sd.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Feb 2022 13:17:55 -0800 Received: from nalasex01a.na.qualcomm.com (10.47.209.196) by nasanex01c.na.qualcomm.com (10.47.97.222) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:52 -0800 Received: from abhinavk-linux.qualcomm.com (10.80.80.8) by nalasex01a.na.qualcomm.com (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:52 -0800 From: Abhinav Kumar To: CC: Abhinav Kumar , , , , , , , , , , , Subject: [PATCH 05/12] drm/msm/dpu: add an API to reset the encoder related hw blocks Date: Fri, 4 Feb 2022 13:17:18 -0800 Message-ID: <1644009445-17320-6-git-send-email-quic_abhinavk@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> References: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01a.na.qualcomm.com (10.47.209.196) Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add an API to reset the encoder related hw blocks to ensure a proper teardown of the pipeline. At the moment this is being used only for the writeback encoder but eventually we can start using this for all interfaces. Signed-off-by: Abhinav Kumar --- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 92 ++++++++++++++++++++++++ drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h | 10 +++ 2 files changed, 102 insertions(+) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index 1e648db..e977c05 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only /* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. * Copyright (c) 2014-2018, 2020-2021 The Linux Foundation. All rights reserved. 
* Copyright (C) 2013 Red Hat * Author: Rob Clark @@ -21,6 +22,7 @@ #include "dpu_hw_intf.h" #include "dpu_hw_ctl.h" #include "dpu_hw_dspp.h" +#include "dpu_hw_merge3d.h" #include "dpu_formats.h" #include "dpu_encoder_phys.h" #include "dpu_crtc.h" @@ -1813,6 +1815,96 @@ void dpu_encoder_kickoff(struct drm_encoder *drm_enc) DPU_ATRACE_END("encoder_kickoff"); } +static void dpu_encoder_helper_reset_mixers(struct dpu_encoder_phys *phys_enc) +{ + struct dpu_hw_mixer_cfg mixer; + int i, num_lm; + u32 flush_mask = 0; + struct dpu_global_state *global_state; + struct dpu_hw_blk *hw_lm[2]; + struct dpu_hw_mixer *hw_mixer[2]; + struct dpu_hw_ctl *ctl = phys_enc->hw_ctl; + + memset(&mixer, 0, sizeof(mixer)); + + /* reset all mixers for this encoder */ + if (phys_enc->hw_ctl->ops.clear_all_blendstages) + phys_enc->hw_ctl->ops.clear_all_blendstages(phys_enc->hw_ctl); + + global_state = dpu_kms_get_existing_global_state(phys_enc->dpu_kms); + + num_lm = dpu_rm_get_assigned_resources(&phys_enc->dpu_kms->rm, global_state, + phys_enc->parent->base.id, DPU_HW_BLK_LM, hw_lm, ARRAY_SIZE(hw_lm)); + + for (i = 0; i < num_lm; i++) { + hw_mixer[i] = to_dpu_hw_mixer(hw_lm[i]); + flush_mask = phys_enc->hw_ctl->ops.get_bitmask_mixer(ctl, hw_mixer[i]->idx); + if (phys_enc->hw_ctl->ops.update_pending_flush) + phys_enc->hw_ctl->ops.update_pending_flush(ctl, flush_mask); + + /* clear all blendstages */ + if (phys_enc->hw_ctl->ops.setup_blendstage) + phys_enc->hw_ctl->ops.setup_blendstage(ctl, hw_mixer[i]->idx, NULL); + } +} + +void dpu_encoder_helper_phys_cleanup(struct dpu_encoder_phys *phys_enc) +{ + struct dpu_hw_ctl *ctl = phys_enc->hw_ctl; + struct dpu_hw_intf_cfg intf_cfg = { 0 }; + int i; + struct dpu_encoder_virt *dpu_enc; + + dpu_enc = to_dpu_encoder_virt(phys_enc->parent); + + phys_enc->hw_ctl->ops.reset(ctl); + + dpu_encoder_helper_reset_mixers(phys_enc); + + if (phys_enc->hw_wb) { + /* disable the PP block */ + if (phys_enc->hw_wb->ops.bind_pingpong_blk) + phys_enc->hw_wb->ops.bind_pingpong_blk(phys_enc->hw_wb, false, + phys_enc->hw_pp->idx); + + /* mark WB flush as pending */ + if (phys_enc->hw_ctl->ops.update_pending_flush_wb) + phys_enc->hw_ctl->ops.update_pending_flush_wb(ctl, phys_enc->hw_wb->idx); + } else { + for (i = 0; i < dpu_enc->num_phys_encs; i++) { + if (dpu_enc->phys_encs[i] && phys_enc->hw_intf->ops.bind_pingpong_blk) + phys_enc->hw_intf->ops.bind_pingpong_blk( + dpu_enc->phys_encs[i]->hw_intf, false, + dpu_enc->phys_encs[i]->hw_pp->idx); + /* mark INTF flush as pending */ + if (phys_enc->hw_ctl->ops.update_pending_flush_intf) + phys_enc->hw_ctl->ops.update_pending_flush_intf(phys_enc->hw_ctl, + dpu_enc->phys_encs[i]->hw_intf->idx); + } + } + + /* reset the merge 3D HW block */ + if (phys_enc->hw_pp->merge_3d) { + phys_enc->hw_pp->merge_3d->ops.setup_3d_mode(phys_enc->hw_pp->merge_3d, + BLEND_3D_NONE); + if (phys_enc->hw_ctl->ops.update_pending_flush_merge_3d) + phys_enc->hw_ctl->ops.update_pending_flush_merge_3d(ctl, + phys_enc->hw_pp->merge_3d->idx); + } + + intf_cfg.stream_sel = 0; /* Don't care value for video mode */ + intf_cfg.mode_3d = dpu_encoder_helper_get_3d_blend_mode(phys_enc); + if (phys_enc->hw_pp->merge_3d) + intf_cfg.merge_3d = phys_enc->hw_pp->merge_3d->idx; + + if (ctl->ops.reset_intf_cfg) + ctl->ops.reset_intf_cfg(ctl, &intf_cfg, true); + + ctl->ops.trigger_flush(ctl); + ctl->ops.trigger_start(ctl); + ctl->ops.clear_pending_flush(ctl); +} + void dpu_encoder_prepare_commit(struct drm_encoder *drm_enc) { struct dpu_encoder_virt *dpu_enc; diff --git 
a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h index e7270eb..07c3525 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. * Copyright (c) 2015-2018 The Linux Foundation. All rights reserved. */ @@ -10,6 +11,7 @@ #include "dpu_kms.h" #include "dpu_hw_intf.h" +#include "dpu_hw_wb.h" #include "dpu_hw_pingpong.h" #include "dpu_hw_ctl.h" #include "dpu_hw_top.h" @@ -189,6 +191,7 @@ struct dpu_encoder_irq { * @hw_ctl: Hardware interface to the ctl registers * @hw_pp: Hardware interface to the ping pong registers * @hw_intf: Hardware interface to the intf registers + * @hw_wb: Hardware interface to the wb registers * @dpu_kms: Pointer to the dpu_kms top level * @cached_mode: DRM mode cached at mode_set time, acted on in enable * @enabled: Whether the encoder has enabled and running a mode @@ -218,6 +221,7 @@ struct dpu_encoder_phys { struct dpu_hw_ctl *hw_ctl; struct dpu_hw_pingpong *hw_pp; struct dpu_hw_intf *hw_intf; + struct dpu_hw_wb *hw_wb; struct dpu_kms *dpu_kms; struct drm_display_mode cached_mode; enum dpu_enc_split_role split_role; @@ -382,4 +386,10 @@ int dpu_encoder_helper_register_irq(struct dpu_encoder_phys *phys_enc, int dpu_encoder_helper_unregister_irq(struct dpu_encoder_phys *phys_enc, enum dpu_intr_idx intr_idx); +/** + * dpu_encoder_helper_phys_cleanup - helper to cleanup dpu pipeline + * @phys_enc: Pointer to physical encoder structure + */ +void dpu_encoder_helper_phys_cleanup(struct dpu_encoder_phys *phys_enc); + #endif /* __dpu_encoder_phys_H__ */ From patchwork Fri Feb 4 21:17:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Abhinav Kumar X-Patchwork-Id: 540088 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BA0D7C4321E for ; Fri, 4 Feb 2022 21:19:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239335AbiBDVTC (ORCPT ); Fri, 4 Feb 2022 16:19:02 -0500 Received: from alexa-out.qualcomm.com ([129.46.98.28]:11203 "EHLO alexa-out.qualcomm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243761AbiBDVR4 (ORCPT ); Fri, 4 Feb 2022 16:17:56 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1644009476; x=1675545476; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=QSMORbJXNxclB8ITNC6UPKDciw5CcSFsdN+sWFJjKAk=; b=RZ6vBX0IK+zRvA/qes50H2MHqUPslHI/y90iFh+koSivPf46Pqu3UXBX x4yVQ5oACG3qIM//eLSqPQdgzum96xzHitv4F+4Vv8H/mAZPOctE+h8w8 jr2hNi+iA3IwvV5Ct35Vzo4CHpZWaEOO9YUWPedJEsYUMvr3v/Eg6XQ3I U=; Received: from ironmsg08-lv.qualcomm.com ([10.47.202.152]) by alexa-out.qualcomm.com with ESMTP; 04 Feb 2022 13:17:56 -0800 X-QCInternal: smtphost Received: from nasanex01c.na.qualcomm.com ([10.47.97.222]) by ironmsg08-lv.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Feb 2022 13:17:55 -0800 Received: from nalasex01a.na.qualcomm.com (10.47.209.196) by nasanex01c.na.qualcomm.com (10.47.97.222) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:55 -0800 Received: 
from abhinavk-linux.qualcomm.com (10.80.80.8) by nalasex01a.na.qualcomm.com (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:54 -0800 From: Abhinav Kumar To: CC: Abhinav Kumar , , , , , , , , , , , Subject: [PATCH 06/12] drm/msm/dpu: make changes to dpu_encoder to support virtual encoder Date: Fri, 4 Feb 2022 13:17:19 -0800 Message-ID: <1644009445-17320-7-git-send-email-quic_abhinavk@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> References: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01a.na.qualcomm.com (10.47.209.196) Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Make changes to dpu_encoder to support virtual encoder needed to support writeback for dpu. Signed-off-by: Abhinav Kumar --- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 57 +++++++++++++++++++++-------- 1 file changed, 42 insertions(+), 15 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index e977c05..947069b 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -974,6 +974,7 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc, struct dpu_hw_blk *hw_ctl[MAX_CHANNELS_PER_ENC]; struct dpu_hw_blk *hw_lm[MAX_CHANNELS_PER_ENC]; struct dpu_hw_blk *hw_dspp[MAX_CHANNELS_PER_ENC] = { NULL }; + enum dpu_hw_blk_type blk_type; int num_lm, num_ctl, num_pp; int i, j; @@ -1061,20 +1062,36 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc, phys->hw_pp = dpu_enc->hw_pp[i]; phys->hw_ctl = to_dpu_hw_ctl(hw_ctl[i]); + if (phys->intf_mode == INTF_MODE_WB_LINE) + blk_type = DPU_HW_BLK_WB; + else + blk_type = DPU_HW_BLK_INTF; + num_blk = dpu_rm_get_assigned_resources(&dpu_kms->rm, - global_state, drm_enc->base.id, DPU_HW_BLK_INTF, + global_state, drm_enc->base.id, blk_type, hw_blk, ARRAY_SIZE(hw_blk)); - for (j = 0; j < num_blk; j++) { - struct dpu_hw_intf *hw_intf; - hw_intf = to_dpu_hw_intf(hw_blk[i]); - if (hw_intf->idx == phys->intf_idx) - phys->hw_intf = hw_intf; + if (blk_type == DPU_HW_BLK_WB) { + for (j = 0; j < num_blk; j++) { + struct dpu_hw_wb *hw_wb; + + hw_wb = to_dpu_hw_wb(hw_blk[i]); + if (hw_wb->idx == phys->intf_idx) + phys->hw_wb = hw_wb; + } + } else { + for (j = 0; j < num_blk; j++) { + struct dpu_hw_intf *hw_intf; + + hw_intf = to_dpu_hw_intf(hw_blk[i]); + if (hw_intf->idx == phys->intf_idx) + phys->hw_intf = hw_intf; + } } - if (!phys->hw_intf) { + if (!phys->hw_intf && !phys->hw_wb) { DPU_ERROR_ENC(dpu_enc, - "no intf block assigned at idx: %d\n", i); + "no intf or WB block assigned at idx: %d\n", i); return; } @@ -1224,15 +1241,22 @@ static void dpu_encoder_virt_disable(struct drm_encoder *drm_enc) mutex_unlock(&dpu_enc->enc_lock); } -static enum dpu_intf dpu_encoder_get_intf(struct dpu_mdss_cfg *catalog, +static enum dpu_intf dpu_encoder_get_intf_or_wb(struct dpu_mdss_cfg *catalog, enum dpu_intf_type type, u32 controller_id) { int i = 0; - for (i = 0; i < catalog->intf_count; i++) { - if (catalog->intf[i].type == type - && catalog->intf[i].controller_id == controller_id) { - return catalog->intf[i].id; + if (type != INTF_WB) { + for (i = 0; i < catalog->intf_count; i++) { + if (catalog->intf[i].type == type + && catalog->intf[i].controller_id 
== controller_id) { + return catalog->intf[i].id; + } + } + } else { + for (i = 0; i < catalog->wb_count; i++) { + if (catalog->wb[i].id == controller_id) + return catalog->wb[i].id; } } @@ -2096,6 +2120,9 @@ static int dpu_encoder_setup_display(struct dpu_encoder_virt *dpu_enc, case DRM_MODE_ENCODER_TMDS: intf_type = INTF_DP; break; + case DRM_MODE_ENCODER_VIRTUAL: + intf_type = INTF_WB; + break; } WARN_ON(disp_info->num_of_h_tiles < 1); @@ -2128,11 +2155,11 @@ static int dpu_encoder_setup_display(struct dpu_encoder_virt *dpu_enc, DPU_DEBUG("h_tile_instance %d = %d, split_role %d\n", i, controller_id, phys_params.split_role); - phys_params.intf_idx = dpu_encoder_get_intf(dpu_kms->catalog, + phys_params.intf_idx = dpu_encoder_get_intf_or_wb(dpu_kms->catalog, intf_type, controller_id); if (phys_params.intf_idx == INTF_MAX) { - DPU_ERROR_ENC(dpu_enc, "could not get intf: type %d, id %d\n", + DPU_ERROR_ENC(dpu_enc, "could not get intf or wb: type %d, id %d\n", intf_type, controller_id); ret = -EINVAL; } From patchwork Fri Feb 4 21:17:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Abhinav Kumar X-Patchwork-Id: 539911 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 49531C433EF for ; Fri, 4 Feb 2022 21:19:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234946AbiBDVTw (ORCPT ); Fri, 4 Feb 2022 16:19:52 -0500 Received: from alexa-out-sd-01.qualcomm.com ([199.106.114.38]:38681 "EHLO alexa-out-sd-01.qualcomm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237265AbiBDVTL (ORCPT ); Fri, 4 Feb 2022 16:19:11 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1644009551; x=1675545551; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=QyAob+wCIGVXxnaF1MriLaaaVPvPOcpFYOrFXQ8LAk0=; b=WQISEvg4PKakQLRwfcXPHcKnpVfO0o0qQ5xar6+2GXKgX+LiR+QCrFEA 8GWa6ZS2iI3nhlV/vtDiRcnucGwPLMpNWKXc3zDMVXxGBTubIIul7UId9 maWivEMJ85jbye2BH9spAYvqXdmtYzJbjPe/tiFEzoR+GlPjpS0WOmaY1 4=; Received: from unknown (HELO ironmsg-SD-alpha.qualcomm.com) ([10.53.140.30]) by alexa-out-sd-01.qualcomm.com with ESMTP; 04 Feb 2022 13:17:57 -0800 X-QCInternal: smtphost Received: from nasanex01c.na.qualcomm.com ([10.47.97.222]) by ironmsg-SD-alpha.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Feb 2022 13:17:57 -0800 Received: from nalasex01a.na.qualcomm.com (10.47.209.196) by nasanex01c.na.qualcomm.com (10.47.97.222) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:57 -0800 Received: from abhinavk-linux.qualcomm.com (10.80.80.8) by nalasex01a.na.qualcomm.com (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:56 -0800 From: Abhinav Kumar To: CC: Abhinav Kumar , , , , , , , , , , , Subject: [PATCH 07/12] drm/msm/dpu: add encoder operations to prepare/cleanup wb job Date: Fri, 4 Feb 2022 13:17:20 -0800 Message-ID: <1644009445-17320-8-git-send-email-quic_abhinavk@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> References: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> MIME-Version: 1.0 
X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01a.na.qualcomm.com (10.47.209.196) Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org add dpu encoder APIs to prepare and cleanup writeback job for the writeback encoder. These shall be invoked from the prepare_wb_job/cleanup_wb_job hooks of the drm_writeback framework. Signed-off-by: Abhinav Kumar --- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 34 ++++++++++++++++++++++++ drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h | 16 +++++++++++ drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h | 5 ++++ 3 files changed, 55 insertions(+) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index 947069b..b51a677 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -958,6 +958,40 @@ static int dpu_encoder_resource_control(struct drm_encoder *drm_enc, return 0; } +void dpu_encoder_prepare_wb_job(struct drm_encoder *drm_enc, + struct drm_writeback_job *job) +{ + struct dpu_encoder_virt *dpu_enc; + int i; + + dpu_enc = to_dpu_encoder_virt(drm_enc); + + for (i = 0; i < dpu_enc->num_phys_encs; i++) { + struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i]; + + if (phys->ops.prepare_wb_job) + phys->ops.prepare_wb_job(phys, job); + + } +} + +void dpu_encoder_cleanup_wb_job(struct drm_encoder *drm_enc, + struct drm_writeback_job *job) +{ + struct dpu_encoder_virt *dpu_enc; + int i; + + dpu_enc = to_dpu_encoder_virt(drm_enc); + + for (i = 0; i < dpu_enc->num_phys_encs; i++) { + struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i]; + + if (phys->ops.cleanup_wb_job) + phys->ops.cleanup_wb_job(phys, job); + + } +} + static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc, struct drm_display_mode *mode, struct drm_display_mode *adj_mode) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h index cc10436..da5b6d6 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h @@ -171,4 +171,20 @@ int dpu_encoder_get_linecount(struct drm_encoder *drm_enc); */ int dpu_encoder_get_vsync_count(struct drm_encoder *drm_enc); +/** + * dpu_encoder_prepare_wb_job - prepare writeback job for the encoder. + * @drm_enc: Pointer to previously created drm encoder structure + * @job: Pointer to the current drm writeback job + */ +void dpu_encoder_prepare_wb_job(struct drm_encoder *drm_enc, + struct drm_writeback_job *job); + +/** + * dpu_encoder_cleanup_wb_job - cleanup writeback job for the encoder. 
+ * @drm_enc: Pointer to previously created drm encoder structure + * @job: Pointer to the current drm writeback job + */ +void dpu_encoder_cleanup_wb_job(struct drm_encoder *drm_enc, + struct drm_writeback_job *job); + #endif /* __DPU_ENCODER_H__ */ diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h index 07c3525..7b3354d 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h @@ -7,6 +7,7 @@ #ifndef __DPU_ENCODER_PHYS_H__ #define __DPU_ENCODER_PHYS_H__ +#include #include #include "dpu_kms.h" @@ -146,6 +147,10 @@ struct dpu_encoder_phys_ops { void (*restore)(struct dpu_encoder_phys *phys); int (*get_line_count)(struct dpu_encoder_phys *phys); int (*get_frame_count)(struct dpu_encoder_phys *phys); + void (*prepare_wb_job)(struct dpu_encoder_phys *phys_enc, + struct drm_writeback_job *job); + void (*cleanup_wb_job)(struct dpu_encoder_phys *phys_enc, + struct drm_writeback_job *job); }; /** From patchwork Fri Feb 4 21:17:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Abhinav Kumar X-Patchwork-Id: 539910 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E7DD6C433F5 for ; Fri, 4 Feb 2022 21:20:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241128AbiBDVUN (ORCPT ); Fri, 4 Feb 2022 16:20:13 -0500 Received: from alexa-out-sd-01.qualcomm.com ([199.106.114.38]:57664 "EHLO alexa-out-sd-01.qualcomm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241882AbiBDVTG (ORCPT ); Fri, 4 Feb 2022 16:19:06 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1644009540; x=1675545540; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=kNw+U0+cAjyEqWmaDkHcvPFnxnqXBvnF7ABOOkECV0k=; b=DoD1dQV/Zj+IA9KeR7EVOFgGeaYAPFHuaiaaEgkI9QKwxRAqMinlBsvn nBoSf/v0najWY9Epol7mgkRTef8HsR0afdf+G/zmLpAdq8V4jyM2yrqnh xy4nqnQis/tPmFMr/920mJ7hg2zi9Z+eAu1ZadP9G+3xTlhH2zLUM2s2u M=; Received: from unknown (HELO ironmsg-SD-alpha.qualcomm.com) ([10.53.140.30]) by alexa-out-sd-01.qualcomm.com with ESMTP; 04 Feb 2022 13:17:59 -0800 X-QCInternal: smtphost Received: from nasanex01c.na.qualcomm.com ([10.47.97.222]) by ironmsg-SD-alpha.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Feb 2022 13:17:59 -0800 Received: from nalasex01a.na.qualcomm.com (10.47.209.196) by nasanex01c.na.qualcomm.com (10.47.97.222) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:59 -0800 Received: from abhinavk-linux.qualcomm.com (10.80.80.8) by nalasex01a.na.qualcomm.com (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:17:58 -0800 From: Abhinav Kumar To: CC: Abhinav Kumar , , , , , , , , , , , Subject: [PATCH 08/12] drm/msm/dpu: introduce the dpu_encoder_phys_* for writeback Date: Fri, 4 Feb 2022 13:17:21 -0800 Message-ID: <1644009445-17320-9-git-send-email-quic_abhinavk@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> References: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> MIME-Version: 
1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01a.na.qualcomm.com (10.47.209.196) Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Introduce the dpu_encoder_phys_* for the writeback interface to handle writeback specific hardware programming. Signed-off-by: Abhinav Kumar --- drivers/gpu/drm/msm/Makefile | 1 + drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h | 36 +- .../gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 813 +++++++++++++++++++++ 3 files changed, 849 insertions(+), 1 deletion(-) create mode 100644 drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile index c43ef35..3abaf84 100644 --- a/drivers/gpu/drm/msm/Makefile +++ b/drivers/gpu/drm/msm/Makefile @@ -53,6 +53,7 @@ msm-y := \ disp/dpu1/dpu_encoder.o \ disp/dpu1/dpu_encoder_phys_cmd.o \ disp/dpu1/dpu_encoder_phys_vid.o \ + disp/dpu1/dpu_encoder_phys_wb.o \ disp/dpu1/dpu_formats.o \ disp/dpu1/dpu_hw_catalog.o \ disp/dpu1/dpu_hw_ctl.o \ diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h index 7b3354d..80da0a9 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h @@ -159,6 +159,7 @@ struct dpu_encoder_phys_ops { * @INTR_IDX_PINGPONG: Pingpong done unterrupt for cmd mode panel * @INTR_IDX_UNDERRUN: Underrun unterrupt for video and cmd mode panel * @INTR_IDX_RDPTR: Readpointer done unterrupt for cmd mode panel + * @INTR_IDX_WB_DONE: Writeback done interrupt for virtual connector */ enum dpu_intr_idx { INTR_IDX_VSYNC, @@ -166,6 +167,7 @@ enum dpu_intr_idx { INTR_IDX_UNDERRUN, INTR_IDX_CTL_START, INTR_IDX_RDPTR, + INTR_IDX_WB_DONE, INTR_IDX_MAX, }; @@ -196,7 +198,7 @@ struct dpu_encoder_irq { * @hw_ctl: Hardware interface to the ctl registers * @hw_pp: Hardware interface to the ping pong registers * @hw_intf: Hardware interface to the intf registers - * @hw_wb: Hardware interface to the wb registers + * @hw_wb: Hardware interface to the wb registers * @dpu_kms: Pointer to the dpu_kms top level * @cached_mode: DRM mode cached at mode_set time, acted on in enable * @enabled: Whether the encoder has enabled and running a mode @@ -250,6 +252,31 @@ static inline int dpu_encoder_phys_inc_pending(struct dpu_encoder_phys *phys) } /** + * struct dpu_encoder_phys_wb - sub-class of dpu_encoder_phys to handle writeback + * specific operations + * @base: Baseclass physical encoder structure + * @wbirq_refcount: Reference count of writeback interrupt + * @wb_done_timeout_cnt: number of wb done irq timeout errors + * @wb_cfg: writeback block config to store fb related details + * @cdp_cfg: chroma down prefetch block config for wb + * @aspace: address space to be used for wb framebuffer + * @wb_conn: backpointer to writeback connector + * @wb_job: backpointer to current writeback job + * @dest: dpu buffer layout for current writeback output buffer + */ +struct dpu_encoder_phys_wb { + struct dpu_encoder_phys base; + atomic_t wbirq_refcount; + int wb_done_timeout_cnt; + struct dpu_hw_wb_cfg wb_cfg; + struct dpu_hw_wb_cdp_cfg cdp_cfg; + struct msm_gem_address_space *aspace; + struct drm_writeback_connector *wb_conn; + struct drm_writeback_job *wb_job; + struct dpu_hw_fmt_layout dest; +}; + +/** + * struct dpu_encoder_phys_cmd - sub-class of dpu_encoder_phys to handle command * mode specific operations * @base: Baseclass physical encoder structure @@ -317,6 +344,13 @@ struct 
dpu_encoder_phys *dpu_encoder_phys_cmd_init( struct dpu_enc_phys_init_params *p); /** + * dpu_encoder_phys_wb_init - initialize writeback encoder + * @init: Pointer to init info structure with initialization params + */ +struct dpu_encoder_phys *dpu_encoder_phys_wb_init( + struct dpu_enc_phys_init_params *p); + +/** * dpu_encoder_helper_trigger_start - control start helper function * This helper function may be optionally specified by physical * encoders if they require ctl_start triggering. diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c new file mode 100644 index 0000000..783f83e --- /dev/null +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c @@ -0,0 +1,813 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#define pr_fmt(fmt) "[drm:%s:%d] " fmt, __func__, __LINE__ + +#include + +#include "dpu_encoder_phys.h" +#include "dpu_formats.h" +#include "dpu_hw_top.h" +#include "dpu_hw_wb.h" +#include "dpu_hw_lm.h" +#include "dpu_hw_blk.h" +#include "dpu_hw_merge3d.h" +#include "dpu_hw_interrupts.h" +#include "dpu_core_irq.h" +#include "dpu_vbif.h" +#include "dpu_crtc.h" +#include "disp/msm_disp_snapshot.h" + +#define DEFAULT_MAX_WRITEBACK_WIDTH 2048 + +#define to_dpu_encoder_phys_wb(x) \ + container_of(x, struct dpu_encoder_phys_wb, base) + +/** + * dpu_encoder_phys_wb_is_master - report wb always as master encoder + */ +static bool dpu_encoder_phys_wb_is_master(struct dpu_encoder_phys *phys_enc) +{ + return true; +} + +/** + * dpu_encoder_phys_wb_set_ot_limit - set OT limit for writeback interface + * @phys_enc: Pointer to physical encoder + */ +static void dpu_encoder_phys_wb_set_ot_limit( + struct dpu_encoder_phys *phys_enc) +{ + struct dpu_hw_wb *hw_wb = phys_enc->hw_wb; + struct dpu_vbif_set_ot_params ot_params; + + memset(&ot_params, 0, sizeof(ot_params)); + ot_params.xin_id = hw_wb->caps->xin_id; + ot_params.num = hw_wb->idx - WB_0; + ot_params.width = phys_enc->cached_mode.hdisplay; + ot_params.height = phys_enc->cached_mode.vdisplay; + ot_params.is_wfd = true; + ot_params.frame_rate = drm_mode_vrefresh(&phys_enc->cached_mode); + ot_params.vbif_idx = hw_wb->caps->vbif_idx; + ot_params.clk_ctrl = hw_wb->caps->clk_ctrl; + ot_params.rd = false; + + dpu_vbif_set_ot_limit(phys_enc->dpu_kms, &ot_params); +} + +/** + * dpu_encoder_phys_wb_set_qos_remap - set QoS remapper for writeback + * @phys_enc: Pointer to physical encoder + */ +static void dpu_encoder_phys_wb_set_qos_remap( + struct dpu_encoder_phys *phys_enc) +{ + struct dpu_hw_wb *hw_wb; + struct dpu_vbif_set_qos_params qos_params; + + if (!phys_enc || !phys_enc->parent || !phys_enc->parent->crtc) { + DPU_ERROR("invalid arguments\n"); + return; + } + + if (!phys_enc->hw_wb || !phys_enc->hw_wb->caps) { + DPU_ERROR("invalid writeback hardware\n"); + return; + } + + hw_wb = phys_enc->hw_wb; + + memset(&qos_params, 0, sizeof(qos_params)); + qos_params.vbif_idx = hw_wb->caps->vbif_idx; + qos_params.xin_id = hw_wb->caps->xin_id; + qos_params.clk_ctrl = hw_wb->caps->clk_ctrl; + qos_params.num = hw_wb->idx - WB_0; + qos_params.is_rt = false; + + DPU_DEBUG("[qos_remap] wb:%d vbif:%d xin:%d is_rt:%d\n", + qos_params.num, + qos_params.vbif_idx, + qos_params.xin_id, qos_params.is_rt); + + dpu_vbif_set_qos_remap(phys_enc->dpu_kms, &qos_params); +} + +//this can be moved to some common file? 
+static u64 _dpu_encoder_phys_wb_get_qos_lut(struct dpu_qos_lut_tbl *tbl, + u32 total_fl) +{ + int i; + + if (!tbl || !tbl->nentry || !tbl->entries) + return 0; + + for (i = 0; i < tbl->nentry; i++) + if (total_fl <= tbl->entries[i].fl) + return tbl->entries[i].lut; + + /* if last fl is zero, use as default */ + if (!tbl->entries[i-1].fl) + return tbl->entries[i-1].lut; + + return 0; +} + +/** + * dpu_encoder_phys_wb_set_qos - set QoS/danger/safe LUTs for writeback + * @phys_enc: Pointer to physical encoder + */ +static void dpu_encoder_phys_wb_set_qos(struct dpu_encoder_phys *phys_enc) +{ + struct dpu_hw_wb *hw_wb; + struct dpu_hw_wb_qos_cfg qos_cfg; + struct dpu_mdss_cfg *catalog; + struct dpu_qos_lut_tbl *qos_lut_tb; + + if (!phys_enc || !phys_enc->dpu_kms || !phys_enc->dpu_kms->catalog) { + DPU_ERROR("invalid parameter(s)\n"); + return; + } + + catalog = phys_enc->dpu_kms->catalog; + + hw_wb = phys_enc->hw_wb; + + memset(&qos_cfg, 0, sizeof(struct dpu_hw_wb_qos_cfg)); + qos_cfg.danger_safe_en = true; + qos_cfg.danger_lut = + catalog->perf.danger_lut_tbl[DPU_QOS_LUT_USAGE_NRT]; + + qos_cfg.safe_lut = catalog->perf.safe_lut_tbl[DPU_QOS_LUT_USAGE_NRT]; + + qos_lut_tb = &catalog->perf.qos_lut_tbl[DPU_QOS_LUT_USAGE_NRT]; + qos_cfg.creq_lut = _dpu_encoder_phys_wb_get_qos_lut(qos_lut_tb, 0); + + if (hw_wb->ops.setup_qos_lut) + hw_wb->ops.setup_qos_lut(hw_wb, &qos_cfg); +} + +/** + * dpu_encoder_phys_wb_setup_fb - setup output framebuffer + * @phys_enc: Pointer to physical encoder + * @fb: Pointer to output framebuffer + * @wb_roi: Pointer to output region of interest + */ +static void dpu_encoder_phys_wb_setup_fb(struct dpu_encoder_phys *phys_enc, + struct drm_framebuffer *fb) +{ + struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); + struct dpu_hw_wb *hw_wb; + struct dpu_hw_wb_cfg *wb_cfg; + struct dpu_hw_wb_cdp_cfg *cdp_cfg; + + if (!phys_enc || !phys_enc->dpu_kms || !phys_enc->dpu_kms->catalog || + !phys_enc->connector) { + DPU_ERROR("invalid encoder\n"); + return; + } + + hw_wb = phys_enc->hw_wb; + wb_cfg = &wb_enc->wb_cfg; + cdp_cfg = &wb_enc->cdp_cfg; + + wb_cfg->intf_mode = phys_enc->intf_mode; + wb_cfg->roi.x1 = 0; + wb_cfg->roi.x2 = phys_enc->cached_mode.hdisplay; + wb_cfg->roi.y1 = 0; + wb_cfg->roi.y2 = phys_enc->cached_mode.vdisplay; + + if (hw_wb->ops.setup_roi) + hw_wb->ops.setup_roi(hw_wb, wb_cfg); + + if (hw_wb->ops.setup_outformat) + hw_wb->ops.setup_outformat(hw_wb, wb_cfg); + + if (hw_wb->ops.setup_cdp) { + memset(cdp_cfg, 0, sizeof(struct dpu_hw_wb_cdp_cfg)); + + cdp_cfg->enable = phys_enc->dpu_kms->catalog->perf.cdp_cfg + [DPU_PERF_CDP_USAGE_NRT].wr_enable; + cdp_cfg->ubwc_meta_enable = + DPU_FORMAT_IS_UBWC(wb_cfg->dest.format); + cdp_cfg->tile_amortize_enable = + DPU_FORMAT_IS_UBWC(wb_cfg->dest.format) || + DPU_FORMAT_IS_TILE(wb_cfg->dest.format); + cdp_cfg->preload_ahead = DPU_WB_CDP_PRELOAD_AHEAD_64; + + hw_wb->ops.setup_cdp(hw_wb, cdp_cfg); + } + + if (hw_wb->ops.setup_outaddress) + hw_wb->ops.setup_outaddress(hw_wb, wb_cfg); +} + +/** + * dpu_encoder_phys_wb_setup_cdp - setup chroma down prefetch block + * @phys_enc:Pointer to physical encoder + */ +static void dpu_encoder_phys_wb_setup_cdp(struct dpu_encoder_phys *phys_enc) +{ + struct dpu_hw_wb *hw_wb; + struct dpu_hw_ctl *ctl; + + if (!phys_enc) { + DPU_ERROR("invalid encoder\n"); + return; + } + + hw_wb = phys_enc->hw_wb; + ctl = phys_enc->hw_ctl; + + if (test_bit(DPU_CTL_ACTIVE_CFG, &ctl->caps->features) && + (phys_enc->hw_ctl && + phys_enc->hw_ctl->ops.setup_intf_cfg)) { + struct 
dpu_hw_intf_cfg intf_cfg = {0}; + struct dpu_hw_pingpong *hw_pp = phys_enc->hw_pp; + enum dpu_3d_blend_mode mode_3d; + + mode_3d = dpu_encoder_helper_get_3d_blend_mode(phys_enc); + + intf_cfg.intf = DPU_NONE; + intf_cfg.wb = hw_wb->idx; + + if (mode_3d && hw_pp && hw_pp->merge_3d) + intf_cfg.merge_3d = hw_pp->merge_3d->idx; + + if (phys_enc->hw_pp->merge_3d && phys_enc->hw_pp->merge_3d->ops.setup_3d_mode) + phys_enc->hw_pp->merge_3d->ops.setup_3d_mode(phys_enc->hw_pp->merge_3d, + mode_3d); + + /* setup which pp blk will connect to this wb */ + if (hw_pp && phys_enc->hw_wb->ops.bind_pingpong_blk) + phys_enc->hw_wb->ops.bind_pingpong_blk(phys_enc->hw_wb, true, + phys_enc->hw_pp->idx); + + phys_enc->hw_ctl->ops.setup_intf_cfg(phys_enc->hw_ctl, + &intf_cfg, true); + } else if (phys_enc->hw_ctl && phys_enc->hw_ctl->ops.setup_intf_cfg) { + struct dpu_hw_intf_cfg intf_cfg = {0}; + + intf_cfg.intf = DPU_NONE; + intf_cfg.wb = hw_wb->idx; + intf_cfg.mode_3d = + dpu_encoder_helper_get_3d_blend_mode(phys_enc); + phys_enc->hw_ctl->ops.setup_intf_cfg(phys_enc->hw_ctl, &intf_cfg, true); + } +} + +/** + * dpu_encoder_phys_wb_atomic_check - verify and fixup given atomic states + * @phys_enc: Pointer to physical encoder + * @crtc_state: Pointer to CRTC atomic state + * @conn_state: Pointer to connector atomic state + */ +static int dpu_encoder_phys_wb_atomic_check( + struct dpu_encoder_phys *phys_enc, + struct drm_crtc_state *crtc_state, + struct drm_connector_state *conn_state) +{ + struct drm_framebuffer *fb; + const struct drm_display_mode *mode; + + if (!conn_state || !conn_state->connector) { + DPU_ERROR("invalid connector state\n"); + return -EINVAL; + } else if (conn_state->connector->status != + connector_status_connected) { + DPU_ERROR("connector not connected %d\n", + conn_state->connector->status); + return -EINVAL; + } + + if (!conn_state->writeback_job || !conn_state->writeback_job->fb) + return 0; + + fb = conn_state->writeback_job->fb; + mode = &crtc_state->mode; + + DPU_DEBUG("[atomic_check:%d, \"%s\",%d,%d]\n", + phys_enc->intf_idx, mode->name, mode->hdisplay, mode->vdisplay); + + DPU_DEBUG("[fb_id:%u][fb:%u,%u]\n", fb->base.id, + fb->width, fb->height); + + if (fb->width != mode->hdisplay) { + DPU_ERROR("invalid fb w=%d, mode w=%d\n", fb->width, + mode->hdisplay); + return -EINVAL; + } else if (fb->height != mode->vdisplay) { + DPU_ERROR("invalid fb h=%d, mode h=%d\n", fb->height, + mode->vdisplay); + return -EINVAL; + } else if (fb->width > DEFAULT_MAX_WRITEBACK_WIDTH) { + DPU_ERROR("invalid fb w=%d, maxlinewidth=%u\n", + fb->width, DEFAULT_MAX_WRITEBACK_WIDTH); + return -EINVAL; + } + + return 0; +} + + +/** + * _dpu_encoder_phys_wb_update_flush - flush hardware update + * @phys_enc: Pointer to physical encoder + */ +static void _dpu_encoder_phys_wb_update_flush(struct dpu_encoder_phys *phys_enc) +{ + struct dpu_hw_wb *hw_wb; + struct dpu_hw_ctl *hw_ctl; + struct dpu_hw_pingpong *hw_pp; + u32 pending_flush = 0; + + if (!phys_enc) + return; + + hw_wb = phys_enc->hw_wb; + hw_pp = phys_enc->hw_pp; + hw_ctl = phys_enc->hw_ctl; + + DPU_DEBUG("[wb:%d]\n", hw_wb->idx - WB_0); + + if (!hw_ctl) { + DPU_DEBUG("[wb:%d] no ctl assigned\n", hw_wb->idx - WB_0); + return; + } + + if (hw_ctl->ops.update_pending_flush_wb) + hw_ctl->ops.update_pending_flush_wb(hw_ctl, hw_wb->idx); + + if (hw_ctl->ops.update_pending_flush_merge_3d && hw_pp && hw_pp->merge_3d) + hw_ctl->ops.update_pending_flush_merge_3d(hw_ctl, + hw_pp->merge_3d->idx); + + if (hw_ctl->ops.get_pending_flush) + pending_flush = 
hw_ctl->ops.get_pending_flush(hw_ctl); + + DPU_DEBUG("Pending flush mask for CTL_%d is 0x%x, WB %d\n", + hw_ctl->idx - CTL_0, pending_flush, + hw_wb->idx - WB_0); +} + +/** + * dpu_encoder_phys_wb_setup - setup writeback encoder + * @phys_enc: Pointer to physical encoder + */ +static void dpu_encoder_phys_wb_setup( + struct dpu_encoder_phys *phys_enc) +{ + struct dpu_hw_wb *hw_wb = phys_enc->hw_wb; + struct drm_display_mode mode = phys_enc->cached_mode; + struct drm_framebuffer *fb = NULL; + + DPU_DEBUG("[mode_set:%d, \"%s\",%d,%d]\n", + hw_wb->idx - WB_0, mode.name, + mode.hdisplay, mode.vdisplay); + + dpu_encoder_phys_wb_set_ot_limit(phys_enc); + + dpu_encoder_phys_wb_set_qos_remap(phys_enc); + + dpu_encoder_phys_wb_set_qos(phys_enc); + + dpu_encoder_phys_wb_setup_fb(phys_enc, fb); + + dpu_encoder_phys_wb_setup_cdp(phys_enc); + +} + +static void _dpu_encoder_phys_wb_frame_done_helper(void *arg) +{ + struct dpu_encoder_phys *phys_enc = arg; + struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); + + struct dpu_hw_wb *hw_wb = phys_enc->hw_wb; + unsigned long lock_flags; + u32 event = DPU_ENCODER_FRAME_EVENT_DONE; + + DPU_DEBUG("[wb:%d]\n", hw_wb->idx - WB_0); + + if (phys_enc->parent_ops->handle_frame_done) + phys_enc->parent_ops->handle_frame_done(phys_enc->parent, + phys_enc, event); + + if (phys_enc->parent_ops->handle_vblank_virt) + phys_enc->parent_ops->handle_vblank_virt(phys_enc->parent, + phys_enc); + + spin_lock_irqsave(phys_enc->enc_spinlock, lock_flags); + atomic_add_unless(&phys_enc->pending_kickoff_cnt, -1, 0); + spin_unlock_irqrestore(phys_enc->enc_spinlock, lock_flags); + + if (wb_enc->wb_conn) + drm_writeback_signal_completion(wb_enc->wb_conn, 0); + + /* Signal any waiting atomic commit thread */ + wake_up_all(&phys_enc->pending_kickoff_wq); +} + +/** + * dpu_encoder_phys_wb_done_irq - writeback interrupt handler + * @arg: Pointer to writeback encoder + * @irq_idx: interrupt index + */ +static void dpu_encoder_phys_wb_done_irq(void *arg, int irq_idx) +{ + _dpu_encoder_phys_wb_frame_done_helper(arg); +} + +/** + * dpu_encoder_phys_wb_irq_ctrl - irq control of WB + * @phys: Pointer to physical encoder + * @enable: indicates enable or disable interrupts + */ +static void dpu_encoder_phys_wb_irq_ctrl( + struct dpu_encoder_phys *phys, bool enable) +{ + + struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys); + int ret = 0; + int refcount; + + refcount = atomic_read(&wb_enc->wbirq_refcount); + + if (enable && atomic_inc_return(&wb_enc->wbirq_refcount) == 1) { + dpu_encoder_helper_register_irq(phys, INTR_IDX_WB_DONE); + if (ret) + atomic_dec_return(&wb_enc->wbirq_refcount); + } else if (!enable && + atomic_dec_return(&wb_enc->wbirq_refcount) == 0) { + dpu_encoder_helper_unregister_irq(phys, INTR_IDX_WB_DONE); + if (ret) + atomic_inc_return(&wb_enc->wbirq_refcount); + } +} + +/** + * dpu_encoder_phys_wb_mode_set - set display mode + * @phys_enc: Pointer to physical encoder + * @mode: Pointer to requested display mode + * @adj_mode: Pointer to adjusted display mode + */ +static void dpu_encoder_phys_wb_mode_set( + struct dpu_encoder_phys *phys_enc, + struct drm_display_mode *mode, + struct drm_display_mode *adj_mode) +{ + struct dpu_encoder_irq *irq; + + if (adj_mode) { + phys_enc->cached_mode = *adj_mode; + drm_mode_debug_printmodeline(adj_mode); + DPU_DEBUG("caching mode:\n"); + } + + irq = &phys_enc->irq[INTR_IDX_WB_DONE]; + irq->irq_idx = phys_enc->hw_wb->caps->intr_wb_done; +} + +static void _dpu_encoder_phys_wb_handle_wbdone_timeout( + struct 
dpu_encoder_phys *phys_enc) +{ + struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); + u32 frame_event = DPU_ENCODER_FRAME_EVENT_ERROR; + + wb_enc->wb_done_timeout_cnt++; + + if (wb_enc->wb_done_timeout_cnt == 1) + msm_disp_snapshot_state(phys_enc->parent->dev); + + atomic_add_unless(&phys_enc->pending_kickoff_cnt, -1, 0); + + /* request a ctl reset before the next kickoff */ + phys_enc->enable_state = DPU_ENC_ERR_NEEDS_HW_RESET; + + if (wb_enc->wb_conn) + drm_writeback_signal_completion(wb_enc->wb_conn, 0); + + if (phys_enc->parent_ops->handle_frame_done) + phys_enc->parent_ops->handle_frame_done( + phys_enc->parent, phys_enc, frame_event); +} + +/** + * dpu_encoder_phys_wb_wait_for_commit_done - wait until request is committed + * @phys_enc: Pointer to physical encoder + */ +static int dpu_encoder_phys_wb_wait_for_commit_done( + struct dpu_encoder_phys *phys_enc) +{ + unsigned long ret; + struct dpu_encoder_wait_info wait_info; + struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); + + wait_info.wq = &phys_enc->pending_kickoff_wq; + wait_info.atomic_cnt = &phys_enc->pending_kickoff_cnt; + wait_info.timeout_ms = KICKOFF_TIMEOUT_MS; + + ret = dpu_encoder_helper_wait_for_irq(phys_enc, INTR_IDX_WB_DONE, + &wait_info); + if (ret == -ETIMEDOUT) + _dpu_encoder_phys_wb_handle_wbdone_timeout(phys_enc); + else if (!ret) + wb_enc->wb_done_timeout_cnt = 0; + + return ret; +} + +/** + * dpu_encoder_phys_wb_prepare_for_kickoff - pre-kickoff processing + * @phys_enc: Pointer to physical encoder + * Returns: Zero on success + */ +static void dpu_encoder_phys_wb_prepare_for_kickoff( + struct dpu_encoder_phys *phys_enc) +{ + struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); + struct drm_connector *drm_conn; + struct drm_connector_state *state; + + DPU_DEBUG("[wb:%d]\n", phys_enc->hw_wb->idx - WB_0); + + if (!wb_enc->wb_conn || !wb_enc->wb_job) { + DPU_ERROR("invalid wb_conn or wb_job\n"); + return; + } + + drm_conn = wb_enc->wb_conn->base; + state = drm_conn->state; + + if (wb_enc->wb_conn && wb_enc->wb_job) + drm_writeback_queue_job(wb_enc->wb_conn, state); + + dpu_encoder_phys_wb_setup(phys_enc); + + _dpu_encoder_phys_wb_update_flush(phys_enc); +} + +/** + * dpu_encoder_phys_wb_needs_single_flush - trigger flush processing + * @phys_enc: Pointer to physical encoder + */ +static bool dpu_encoder_phys_wb_needs_single_flush(struct dpu_encoder_phys *phys_enc) +{ + DPU_DEBUG("[wb:%d]\n", phys_enc->hw_wb->idx - WB_0); + return false; +} + +/** + * dpu_encoder_phys_wb_handle_post_kickoff - post-kickoff processing + * @phys_enc: Pointer to physical encoder + */ +static void dpu_encoder_phys_wb_handle_post_kickoff( + struct dpu_encoder_phys *phys_enc) +{ + DPU_DEBUG("[wb:%d]\n", phys_enc->hw_wb->idx - WB_0); + +} + +/** + * dpu_encoder_phys_wb_enable - enable writeback encoder + * @phys_enc: Pointer to physical encoder + */ +static void dpu_encoder_phys_wb_enable(struct dpu_encoder_phys *phys_enc) +{ + DPU_DEBUG("[wb:%d]\n", phys_enc->hw_wb->idx - WB_0); + phys_enc->enable_state = DPU_ENC_ENABLED; +} +/** + * dpu_encoder_phys_wb_disable - disable writeback encoder + * @phys_enc: Pointer to physical encoder + */ +static void dpu_encoder_phys_wb_disable(struct dpu_encoder_phys *phys_enc) +{ + struct dpu_hw_wb *hw_wb = phys_enc->hw_wb; + + DPU_DEBUG("[wb:%d]\n", hw_wb->idx - WB_0); + + if (phys_enc->enable_state == DPU_ENC_DISABLED) { + DPU_ERROR("encoder is already disabled\n"); + return; + } + + /* reset h/w before final flush */ + if 
(phys_enc->hw_ctl->ops.clear_pending_flush) + phys_enc->hw_ctl->ops.clear_pending_flush(phys_enc->hw_ctl); + + /* + * New CTL reset sequence from 5.0 MDP onwards. + * If has_3d_merge_reset is not set, legacy reset + * sequence is executed. + * + * Legacy reset sequence has not been implemented yet. + * Any target earlier than SM8150 will need it and when + * WB support is added to those targets will need to add + * the legacy teardown sequence as well. + */ + if (phys_enc->hw_pp->merge_3d) + dpu_encoder_helper_phys_cleanup(phys_enc); + + phys_enc->enable_state = DPU_ENC_DISABLED; +} + +/** + * dpu_encoder_phys_wb_get_hw_resources - get hardware resources + * @phys_enc: Pointer to physical encoder + * @hw_res: Pointer to encoder resources + */ +static void dpu_encoder_phys_wb_get_hw_resources( + struct dpu_encoder_phys *phys_enc, + struct dpu_encoder_hw_resources *hw_res) +{ + if (!phys_enc) { + DPU_ERROR("invalid encoder\n"); + return; + } + + hw_res->wbs[phys_enc->intf_idx - WB_0] = INTF_MODE_WB_LINE; + DPU_DEBUG("[wb:%d] intf_mode=%d\n", phys_enc->intf_idx - WB_0, + hw_res->wbs[phys_enc->intf_idx - WB_0]); +} + +/** + * dpu_encoder_phys_wb_destroy - destroy writeback encoder + * @phys_enc: Pointer to physical encoder + */ +static void dpu_encoder_phys_wb_destroy(struct dpu_encoder_phys *phys_enc) +{ + DPU_DEBUG("[wb:%d]\n", phys_enc->intf_idx - INTF_0); + + if (!phys_enc) + return; + + kfree(phys_enc); +} + +static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc, + struct drm_writeback_job *job) +{ + const struct msm_format *format; + struct dpu_hw_wb_cfg *wb_cfg; + int ret; + struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); + + if (!job->fb) + return; + + wb_enc->wb_job = job; + wb_enc->wb_conn = job->connector; + wb_enc->aspace = phys_enc->dpu_kms->base.aspace; + + wb_cfg = &wb_enc->wb_cfg; + + memset(wb_cfg, 0, sizeof(struct dpu_hw_wb_cfg)); + + ret = msm_framebuffer_prepare(job->fb, wb_enc->aspace); + if (ret) { + DPU_ERROR("prep fb failed, %d\n", ret); + return; + } + + format = msm_framebuffer_format(job->fb); + + wb_cfg->dest.format = dpu_get_dpu_format_ext( + format->pixel_format, job->fb->modifier); + if (!wb_cfg->dest.format) { + /* this error should be detected during atomic_check */ + DPU_ERROR("failed to get format %x\n", format->pixel_format); + return; + } + + ret = dpu_format_populate_layout(wb_enc->aspace, job->fb, &wb_cfg->dest); + if (ret) { + DPU_DEBUG("failed to populate layout %d\n", ret); + return; + } + + wb_cfg->dest.width = job->fb->width; + wb_cfg->dest.height = job->fb->height; + wb_cfg->dest.num_planes = wb_cfg->dest.format->num_planes; + + if ((wb_cfg->dest.format->fetch_planes == DPU_PLANE_PLANAR) && + (wb_cfg->dest.format->element[0] == C1_B_Cb)) + swap(wb_cfg->dest.plane_addr[1], wb_cfg->dest.plane_addr[2]); + + DPU_DEBUG("[fb_offset:%8.8x,%8.8x,%8.8x,%8.8x]\n", + wb_cfg->dest.plane_addr[0], wb_cfg->dest.plane_addr[1], + wb_cfg->dest.plane_addr[2], wb_cfg->dest.plane_addr[3]); + + DPU_DEBUG("[fb_stride:%8.8x,%8.8x,%8.8x,%8.8x]\n", + wb_cfg->dest.plane_pitch[0], wb_cfg->dest.plane_pitch[1], + wb_cfg->dest.plane_pitch[2], wb_cfg->dest.plane_pitch[3]); +} + +static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc, + struct drm_writeback_job *job) +{ + struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); + + if (!job->fb) + return; + + msm_framebuffer_cleanup(job->fb, wb_enc->aspace); + // revisit this after everything else works + wb_enc->wb_job = NULL; + 
wb_enc->wb_conn = NULL; +} + +/** + * dpu_encoder_phys_wb_init_ops - initialize writeback operations + * @ops: Pointer to encoder operation table + */ +static void dpu_encoder_phys_wb_init_ops(struct dpu_encoder_phys_ops *ops) +{ + ops->is_master = dpu_encoder_phys_wb_is_master; + ops->mode_set = dpu_encoder_phys_wb_mode_set; + ops->enable = dpu_encoder_phys_wb_enable; + ops->disable = dpu_encoder_phys_wb_disable; + ops->destroy = dpu_encoder_phys_wb_destroy; + ops->atomic_check = dpu_encoder_phys_wb_atomic_check; + ops->get_hw_resources = dpu_encoder_phys_wb_get_hw_resources; + ops->wait_for_commit_done = dpu_encoder_phys_wb_wait_for_commit_done; + ops->prepare_for_kickoff = dpu_encoder_phys_wb_prepare_for_kickoff; + ops->handle_post_kickoff = dpu_encoder_phys_wb_handle_post_kickoff; + ops->needs_single_flush = dpu_encoder_phys_wb_needs_single_flush; + ops->trigger_start = dpu_encoder_helper_trigger_start; + ops->prepare_wb_job = dpu_encoder_phys_wb_prepare_wb_job; + ops->cleanup_wb_job = dpu_encoder_phys_wb_cleanup_wb_job; + ops->irq_control = dpu_encoder_phys_wb_irq_ctrl; +} + +/** + * dpu_encoder_phys_wb_init - initialize writeback encoder + * @init: Pointer to init info structure with initialization params + */ +struct dpu_encoder_phys *dpu_encoder_phys_wb_init( + struct dpu_enc_phys_init_params *p) +{ + struct dpu_encoder_phys *phys_enc = NULL; + struct dpu_encoder_phys_wb *wb_enc = NULL; + + struct dpu_encoder_irq *irq; + int ret = 0; + int i; + + DPU_DEBUG("\n"); + + if (!p || !p->parent) { + DPU_ERROR("invalid params\n"); + ret = -EINVAL; + goto fail_alloc; + } + + wb_enc = kzalloc(sizeof(*wb_enc), GFP_KERNEL); + if (!wb_enc) { + DPU_ERROR("failed to allocate wb phys_enc enc\n"); + ret = -ENOMEM; + goto fail_alloc; + } + + phys_enc = &wb_enc->base; + phys_enc->hw_mdptop = p->dpu_kms->hw_mdp; + phys_enc->intf_idx = p->intf_idx; + + dpu_encoder_phys_wb_init_ops(&phys_enc->ops); + phys_enc->parent = p->parent; + phys_enc->parent_ops = p->parent_ops; + phys_enc->dpu_kms = p->dpu_kms; + phys_enc->split_role = p->split_role; + phys_enc->intf_mode = INTF_MODE_WB_LINE; + phys_enc->intf_idx = p->intf_idx; + phys_enc->enc_spinlock = p->enc_spinlock; + + atomic_set(&wb_enc->wbirq_refcount, 0); + + for (i = 0; i < INTR_IDX_MAX; i++) { + irq = &phys_enc->irq[i]; + INIT_LIST_HEAD(&irq->cb.list); + irq->irq_idx = -EINVAL; + irq->cb.arg = phys_enc; + } + + irq = &phys_enc->irq[INTR_IDX_WB_DONE]; + irq->name = "wb_done"; + irq->intr_idx = INTR_IDX_WB_DONE; + irq->cb.func = dpu_encoder_phys_wb_done_irq; + + atomic_set(&phys_enc->pending_kickoff_cnt, 0); + atomic_set(&phys_enc->vblank_refcount, 0); + wb_enc->wb_done_timeout_cnt = 0; + + init_waitqueue_head(&phys_enc->pending_kickoff_wq); + phys_enc->enable_state = DPU_ENC_DISABLED; + + DPU_DEBUG("Created dpu_encoder_phys for wb %d\n", + phys_enc->intf_idx); + + return phys_enc; + +fail_alloc: + return ERR_PTR(ret); +} From patchwork Fri Feb 4 21:17:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Abhinav Kumar X-Patchwork-Id: 540089 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 828F1C433F5 for ; Fri, 4 Feb 2022 21:19:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244807AbiBDVS5 (ORCPT ); Fri, 4 Feb 2022 16:18:57 -0500 Received: from 
alexa-out.qualcomm.com ([129.46.98.28]:10055 "EHLO alexa-out.qualcomm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1348880AbiBDVSB (ORCPT ); Fri, 4 Feb 2022 16:18:01 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1644009481; x=1675545481; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=mdXVrRClczTlrvvUfxixMENAybwz0givakdQipRAfjA=; b=MUnkqv9fEtUnyTT7cLxuwVqo6q1bkcRe3+yIEAN0xJ0e7ABe1GR91flN p6uYJ1SWQXOHOzEJWcYb5kKVnGuS6eWY/5CwO0cxmSK7Z2D81ahd7M8l9 ciMbgOR4AQ0LBLjzHArf4E/IsnYJoeWpUM/LbA2ydqOE3OXIBXrxIwVp7 c=; Received: from ironmsg09-lv.qualcomm.com ([10.47.202.153]) by alexa-out.qualcomm.com with ESMTP; 04 Feb 2022 13:18:01 -0800 X-QCInternal: smtphost Received: from nasanex01c.na.qualcomm.com ([10.47.97.222]) by ironmsg09-lv.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Feb 2022 13:18:01 -0800 Received: from nalasex01a.na.qualcomm.com (10.47.209.196) by nasanex01c.na.qualcomm.com (10.47.97.222) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:18:00 -0800 Received: from abhinavk-linux.qualcomm.com (10.80.80.8) by nalasex01a.na.qualcomm.com (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:18:00 -0800 From: Abhinav Kumar To: CC: Abhinav Kumar , , , , , , , , , , , Subject: [PATCH 09/12] drm/msm/dpu: add the writeback connector layer Date: Fri, 4 Feb 2022 13:17:22 -0800 Message-ID: <1644009445-17320-10-git-send-email-quic_abhinavk@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> References: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01a.na.qualcomm.com (10.47.209.196) Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Introduce the dpu_writeback module which serves as the interface between dpu operations and the drm_writeback. This module manages the connector related operations for dpu writeback. Signed-off-by: Abhinav Kumar --- drivers/gpu/drm/msm/Makefile | 1 + drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c | 71 +++++++++++++++++++++++++++ drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.h | 27 ++++++++++ 3 files changed, 99 insertions(+) create mode 100644 drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c create mode 100644 drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.h diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile index 3abaf84..05a8515 100644 --- a/drivers/gpu/drm/msm/Makefile +++ b/drivers/gpu/drm/msm/Makefile @@ -74,6 +74,7 @@ msm-y := \ disp/dpu1/dpu_plane.o \ disp/dpu1/dpu_rm.o \ disp/dpu1/dpu_vbif.o \ + disp/dpu1/dpu_writeback.o \ disp/msm_disp_snapshot.o \ disp/msm_disp_snapshot_util.o \ msm_atomic.o \ diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c new file mode 100644 index 0000000..7b61fad --- /dev/null +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c @@ -0,0 +1,71 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include "dpu_writeback.h" + +static int dpu_wb_conn_get_modes(struct drm_connector *connector) +{ + struct drm_device *dev = connector->dev; + + return drm_add_modes_noedid(connector, dev->mode_config.max_width, + dev->mode_config.max_height); +} + +static const struct drm_connector_funcs dpu_wb_conn_funcs = { + .reset = drm_atomic_helper_connector_reset, + .fill_modes = drm_helper_probe_single_connector_modes, + .destroy = drm_connector_cleanup, + .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, + .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, +}; + +static int dpu_wb_conn_prepare_job(struct drm_writeback_connector *connector, + struct drm_writeback_job *job) +{ + if (!job->fb) + return 0; + + dpu_encoder_prepare_wb_job(connector->encoder, job); + + return 0; +} + +static void dpu_wb_conn_cleanup_job(struct drm_writeback_connector *connector, + struct drm_writeback_job *job) +{ + if (!job->fb) + return; + + dpu_encoder_cleanup_wb_job(connector->encoder, job); +} + +static const struct drm_connector_helper_funcs dpu_wb_conn_helper_funcs = { + .get_modes = dpu_wb_conn_get_modes, + .prepare_writeback_job = dpu_wb_conn_prepare_job, + .cleanup_writeback_job = dpu_wb_conn_cleanup_job, +}; + +int dpu_writeback_init(struct drm_device *dev, struct drm_encoder *enc, + const struct drm_encoder_helper_funcs *enc_helper_funcs, const u32 *format_list, + u32 num_formats) +{ + struct msm_drm_private *priv = dev->dev_private; + struct dpu_wb_connector *dpu_wb_conn; + int rc = 0; + + dpu_wb_conn = devm_kzalloc(dev->dev, sizeof(*dpu_wb_conn), GFP_KERNEL); + dpu_wb_conn->base.base = &dpu_wb_conn->connector; + dpu_wb_conn->base.encoder = enc; + + drm_connector_helper_add(dpu_wb_conn->base.base, &dpu_wb_conn_helper_funcs); + + rc = drm_writeback_connector_init(dev, &dpu_wb_conn->base, + &dpu_wb_conn_funcs, enc_helper_funcs, + format_list, num_formats); + + priv->connectors[priv->num_connectors++] = dpu_wb_conn->base.base; + + return rc; +} diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.h new file mode 100644 index 0000000..206ce5e --- /dev/null +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#ifndef _DPU_WRITEBACK_H +#define _DPU_WRITEBACK_H + +#include +#include +#include +#include + +#include "msm_drv.h" +#include "dpu_kms.h" +#include "dpu_encoder_phys.h" + +struct dpu_wb_connector { + struct drm_connector connector; + struct drm_writeback_connector base; +}; + +int dpu_writeback_init(struct drm_device *dev, struct drm_encoder *enc, + const struct drm_encoder_helper_funcs *enc_helper_funcs, + const u32 *format_list, u32 num_formats); + +#endif /*_DPU_WRITEBACK_H */ From patchwork Fri Feb 4 21:17:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Abhinav Kumar X-Patchwork-Id: 539912 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 611A1C433F5 for ; Fri, 4 Feb 2022 21:19:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242562AbiBDVTm (ORCPT ); Fri, 4 Feb 2022 16:19:42 -0500 Received: from alexa-out-sd-01.qualcomm.com ([199.106.114.38]:57664 "EHLO alexa-out-sd-01.qualcomm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241779AbiBDVTb (ORCPT ); Fri, 4 Feb 2022 16:19:31 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1644009571; x=1675545571; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=n1xVUNDOHmq0oLGLcvjb2F/t/eFRcX9+JI9/b4KU46c=; b=vCPPe2MKhsuVnJcSmenR0Xp89Jqazyg3Qt5pURiHZfE3eEkouwxACmIp p8dfReADXN4h9BpPXY1a3V+8pKqRqOzBZ0vxDULzXfAZ8fQ6NrPHPQzWQ GNKkLg4x0sy1HOIeSCgDVmMm6fhs+G+ysJrObTqeOu+GdF8LjW8DpEf4u E=; Received: from unknown (HELO ironmsg-SD-alpha.qualcomm.com) ([10.53.140.30]) by alexa-out-sd-01.qualcomm.com with ESMTP; 04 Feb 2022 13:18:03 -0800 X-QCInternal: smtphost Received: from nasanex01c.na.qualcomm.com ([10.47.97.222]) by ironmsg-SD-alpha.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Feb 2022 13:18:02 -0800 Received: from nalasex01a.na.qualcomm.com (10.47.209.196) by nasanex01c.na.qualcomm.com (10.47.97.222) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:18:02 -0800 Received: from abhinavk-linux.qualcomm.com (10.80.80.8) by nalasex01a.na.qualcomm.com (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:18:02 -0800 From: Abhinav Kumar To: CC: Abhinav Kumar , , , , , , , , , , , Subject: [PATCH 10/12] drm/msm/dpu: initialize dpu encoder and connector for writeback Date: Fri, 4 Feb 2022 13:17:23 -0800 Message-ID: <1644009445-17320-11-git-send-email-quic_abhinavk@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> References: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01a.na.qualcomm.com (10.47.209.196) Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Initialize dpu encoder and connector for writeback if the target supports it in the catalog. 
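For context, the connector registered here is driven entirely through the generic DRM writeback connector properties; below is a minimal userspace sketch of queueing one writeback frame against it. This is illustrative only and not part of this series: the get_prop_id() helper is an assumed convenience wrapper, and the caller is assumed to have already enabled DRM_CLIENT_CAP_ATOMIC and DRM_CLIENT_CAP_WRITEBACK_CONNECTORS on the fd and created fb_id with a format advertised by the connector.

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* assumed helper: look up a property id on an object by name */
static uint32_t get_prop_id(int fd, uint32_t obj_id, uint32_t obj_type,
			    const char *name)
{
	drmModeObjectProperties *props =
		drmModeObjectGetProperties(fd, obj_id, obj_type);
	uint32_t i, id = 0;

	for (i = 0; props && i < props->count_props; i++) {
		drmModePropertyRes *prop =
			drmModeGetProperty(fd, props->props[i]);

		if (prop && !strcmp(prop->name, name))
			id = prop->prop_id;
		drmModeFreeProperty(prop);
	}
	drmModeFreeObjectProperties(props);
	return id;
}

/* queue one frame of the given CRTC into fb_id via the writeback connector */
static int queue_writeback(int fd, uint32_t wb_conn_id, uint32_t crtc_id,
			   uint32_t fb_id)
{
	drmModeAtomicReq *req = drmModeAtomicAlloc();
	int ret;

	if (!req)
		return -ENOMEM;

	/* attach the writeback connector to the CRTC being captured */
	drmModeAtomicAddProperty(req, wb_conn_id,
		get_prop_id(fd, wb_conn_id, DRM_MODE_OBJECT_CONNECTOR, "CRTC_ID"),
		crtc_id);
	/* hand the output framebuffer to the writeback job */
	drmModeAtomicAddProperty(req, wb_conn_id,
		get_prop_id(fd, wb_conn_id, DRM_MODE_OBJECT_CONNECTOR, "WRITEBACK_FB_ID"),
		fb_id);

	ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
	drmModeAtomicFree(req);
	return ret;
}
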
Signed-off-by: Abhinav Kumar --- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 37 ++++++++++++----- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 62 +++++++++++++++++++++++++++++ 2 files changed, 88 insertions(+), 11 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index b51a677..3746432 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -2066,7 +2066,7 @@ static void dpu_encoder_early_unregister(struct drm_encoder *encoder) } static int dpu_encoder_virt_add_phys_encs( - u32 display_caps, + struct msm_display_info *disp_info, struct dpu_encoder_virt *dpu_enc, struct dpu_enc_phys_init_params *params) { @@ -2085,7 +2085,7 @@ static int dpu_encoder_virt_add_phys_encs( return -EINVAL; } - if (display_caps & MSM_DISPLAY_CAP_VID_MODE) { + if (disp_info->capabilities & MSM_DISPLAY_CAP_VID_MODE) { enc = dpu_encoder_phys_vid_init(params); if (IS_ERR_OR_NULL(enc)) { @@ -2098,7 +2098,7 @@ static int dpu_encoder_virt_add_phys_encs( ++dpu_enc->num_phys_encs; } - if (display_caps & MSM_DISPLAY_CAP_CMD_MODE) { + if (disp_info->capabilities & MSM_DISPLAY_CAP_CMD_MODE) { enc = dpu_encoder_phys_cmd_init(params); if (IS_ERR_OR_NULL(enc)) { @@ -2111,6 +2111,19 @@ static int dpu_encoder_virt_add_phys_encs( ++dpu_enc->num_phys_encs; } + if (disp_info->intf_type == DRM_MODE_ENCODER_VIRTUAL) { + enc = dpu_encoder_phys_wb_init(params); + + if (IS_ERR_OR_NULL(enc)) { + DPU_ERROR_ENC(dpu_enc, "failed to init wb enc: %ld\n", + PTR_ERR(enc)); + return enc == NULL ? -EINVAL : PTR_ERR(enc); + } + + dpu_enc->phys_encs[dpu_enc->num_phys_encs] = enc; + ++dpu_enc->num_phys_encs; + } + if (params->split_role == ENC_ROLE_SLAVE) dpu_enc->cur_slave = enc; else @@ -2199,9 +2212,8 @@ static int dpu_encoder_setup_display(struct dpu_encoder_virt *dpu_enc, } if (!ret) { - ret = dpu_encoder_virt_add_phys_encs(disp_info->capabilities, - dpu_enc, - &phys_params); + ret = dpu_encoder_virt_add_phys_encs(disp_info, + dpu_enc, &phys_params); if (ret) DPU_ERROR_ENC(dpu_enc, "failed to add phys encs\n"); } @@ -2317,11 +2329,14 @@ struct drm_encoder *dpu_encoder_init(struct drm_device *dev, if (!dpu_enc) return ERR_PTR(-ENOMEM); - rc = drm_encoder_init(dev, &dpu_enc->base, &dpu_encoder_funcs, - drm_enc_mode, NULL); - if (rc) { - devm_kfree(dev->dev, dpu_enc); - return ERR_PTR(rc); + /* this is handled by drm_writeback_connector_init for virtual encoder */ + if (drm_enc_mode != DRM_MODE_ENCODER_VIRTUAL) { + rc = drm_encoder_init(dev, &dpu_enc->base, &dpu_encoder_funcs, + drm_enc_mode, NULL); + if (rc) { + devm_kfree(dev->dev, dpu_enc); + return ERR_PTR(rc); + } } drm_encoder_helper_add(&dpu_enc->base, &dpu_encoder_helper_funcs); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index 47fe11a..6327ba9 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only /* + * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. * Copyright (c) 2014-2018, The Linux Foundation. All rights reserved. 
* Copyright (C) 2013 Red Hat * Author: Rob Clark @@ -15,6 +16,7 @@ #include #include #include +#include #include "msm_drv.h" #include "msm_mmu.h" @@ -29,6 +31,7 @@ #include "dpu_kms.h" #include "dpu_plane.h" #include "dpu_vbif.h" +#include "dpu_writeback.h" #define CREATE_TRACE_POINTS #include "dpu_trace.h" @@ -642,6 +645,56 @@ static int _dpu_kms_initialize_displayport(struct drm_device *dev, return 0; } +static int _dpu_kms_initialize_writeback(struct drm_device *dev, + struct msm_drm_private *priv, struct dpu_kms *dpu_kms) +{ + struct drm_encoder *encoder = NULL; + struct msm_display_info info; + int rc, i; + const u32 *wb_formats; + int n_formats; + + encoder = dpu_encoder_init(dev, DRM_MODE_ENCODER_VIRTUAL); + if (IS_ERR(encoder)) { + DPU_ERROR("encoder init failed for writeback display\n"); + return PTR_ERR(encoder); + } + + memset(&info, 0, sizeof(info)); + + for (i = 0; i < dpu_kms->catalog->wb_count; i++) { + if (dpu_kms->catalog->wb[i].id == WB_2) { + wb_formats = dpu_kms->catalog->wb[i].format_list; + n_formats = dpu_kms->catalog->wb[i].num_formats; + } + } + + rc = dpu_writeback_init(dev, encoder, encoder->helper_private, wb_formats, + n_formats); + if (rc) { + DPU_ERROR("dpu_writeback_init, rc = %d\n", rc); + drm_encoder_cleanup(encoder); + return rc; + } + + priv->encoders[priv->num_encoders++] = encoder; + + info.num_of_h_tiles = 1; + /* use only WB idx 2 instance for DPU */ + info.h_tile_instance[0] = WB_2; + info.capabilities = MSM_DISPLAY_CAP_HOT_PLUG | MSM_DISPLAY_CAP_EDID; + info.intf_type = encoder->encoder_type; + + rc = dpu_encoder_setup(dev, encoder, &info); + if (rc) { + DPU_ERROR("failed to setup DPU encoder %d: rc:%d\n", + encoder->base.id, rc); + return rc; + } + + return 0; +} + /** * _dpu_kms_setup_displays - create encoders, bridges and connectors * for underlying displays @@ -668,6 +721,15 @@ static int _dpu_kms_setup_displays(struct drm_device *dev, return rc; } + /* Since WB isn't a separate driver, check the catalog before initializing */ + if (dpu_kms->catalog->wb_count) { + rc = _dpu_kms_initialize_writeback(dev, priv, dpu_kms); + if (rc) { + DPU_ERROR("initialize_WB failed, rc = %d\n", rc); + return rc; + } + } + return rc; } From patchwork Fri Feb 4 21:17:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Abhinav Kumar X-Patchwork-Id: 540085 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BC411C433EF for ; Fri, 4 Feb 2022 21:19:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243096AbiBDVTn (ORCPT ); Fri, 4 Feb 2022 16:19:43 -0500 Received: from alexa-out-sd-01.qualcomm.com ([199.106.114.38]:34876 "EHLO alexa-out-sd-01.qualcomm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243659AbiBDVTc (ORCPT ); Fri, 4 Feb 2022 16:19:32 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1644009572; x=1675545572; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=jhZ7iwY8GHH2dmriMOdF3RCraXP7TbEIGaAbozAEU8Q=; b=m2ifxH/2Gm4ar9SJrA+Fh6z1si8GQ3f6s/UM8zUJeZnxSRnRd3uZbmHo wnFpYthaQA4zCkToxKV27yEfPlPTYF9PdC8iRSXPKl8u68vMG/RJIu/rE 6moO3yiI/zLtaxX5H3AEwTE1QqKkAUwwlzdYcySJbLoho0pi23+rn/cD3 o=; Received: from unknown (HELO ironmsg01-sd.qualcomm.com) ([10.53.140.141]) by alexa-out-sd-01.qualcomm.com 
with ESMTP; 04 Feb 2022 13:18:05 -0800 X-QCInternal: smtphost Received: from nasanex01c.na.qualcomm.com ([10.47.97.222]) by ironmsg01-sd.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Feb 2022 13:18:05 -0800 Received: from nalasex01a.na.qualcomm.com (10.47.209.196) by nasanex01c.na.qualcomm.com (10.47.97.222) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:18:04 -0800 Received: from abhinavk-linux.qualcomm.com (10.80.80.8) by nalasex01a.na.qualcomm.com (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:18:03 -0800 From: Abhinav Kumar To: CC: Abhinav Kumar , , , , , , , , , , , Subject: [PATCH 11/12] drm/msm/dpu: gracefully handle null fb commits for writeback Date: Fri, 4 Feb 2022 13:17:24 -0800 Message-ID: <1644009445-17320-12-git-send-email-quic_abhinavk@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> References: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01a.na.qualcomm.com (10.47.209.196) Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org kms_writeback test cases also verify with a null fb for the writeback connector job. In addition there are also other commit paths which can result in kickoffs without a valid framebuffer like while closing the fb which results in the callback to drm_atomic_helper_dirtyfb() which internally triggers a commit. Add protection in the dpu driver to ensure that commits for writeback encoders without a valid fb are gracefully skipped. Signed-off-by: Abhinav Kumar --- drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c | 9 +++++++++ drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 21 +++++++++++++++++++++ drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h | 6 ++++++ drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h | 1 + drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 12 ++++++++++++ 5 files changed, 49 insertions(+) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c index e7c9fe1..f7963b0 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c @@ -869,6 +869,13 @@ void dpu_crtc_commit_kickoff(struct drm_crtc *crtc) DPU_ATRACE_BEGIN("crtc_commit"); + drm_for_each_encoder_mask(encoder, crtc->dev, + crtc->state->encoder_mask) { + if (!dpu_encoder_has_valid_fb(encoder)) { + DRM_DEBUG_ATOMIC("invalid FB not kicking off crtc\n"); + goto end; + } + } /* * Encoder will flush/start now, unless it has a tx pending. If so, it * may delay and flush at an irq event (e.g. 
ppdone) @@ -891,6 +898,8 @@ void dpu_crtc_commit_kickoff(struct drm_crtc *crtc) dpu_encoder_kickoff(encoder); reinit_completion(&dpu_crtc->frame_done_comp); + +end: DPU_ATRACE_END("crtc_commit"); } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index 3746432..e990dbc 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -1832,6 +1832,27 @@ void dpu_encoder_prepare_for_kickoff(struct drm_encoder *drm_enc) } } +bool dpu_encoder_has_valid_fb(struct drm_encoder *drm_enc) +{ + struct dpu_encoder_virt *dpu_enc; + unsigned int i; + struct dpu_encoder_phys *phys; + + dpu_enc = to_dpu_encoder_virt(drm_enc); + + if (drm_enc->encoder_type == DRM_MODE_ENCODER_VIRTUAL) { + for (i = 0; i < dpu_enc->num_phys_encs; i++) { + phys = dpu_enc->phys_encs[i]; + if (phys->ops.has_valid_output_fb && !phys->ops.has_valid_output_fb(phys)) { + DPU_DEBUG("invalid FB not kicking off\n"); + return false; + } + } + } + + return true; +} + void dpu_encoder_kickoff(struct drm_encoder *drm_enc) { struct dpu_encoder_virt *dpu_enc; diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h index da5b6d6..63d90b8 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h @@ -187,4 +187,10 @@ void dpu_encoder_prepare_wb_job(struct drm_encoder *drm_enc, struct drm_writeback_job *job); void dpu_encoder_cleanup_wb_job(struct drm_encoder *drm_enc, struct drm_writeback_job *job); +/** + * dpu_encoder_has_valid_fb - check if writeback encoders have a valid output fb + * @drm_enc: Pointer to drm encoder structure + */ +bool dpu_encoder_has_valid_fb(struct drm_encoder *drm_enc); + #endif /* __DPU_ENCODER_H__ */ diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h index 80da0a9..5b45b3c 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h @@ -151,6 +151,7 @@ struct dpu_encoder_phys_ops { struct drm_writeback_job *job); void (*cleanup_wb_job)(struct dpu_encoder_phys *phys_enc, struct drm_writeback_job *job); + bool (*has_valid_output_fb)(struct dpu_encoder_phys *phys_enc); }; /** diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c index 783f83e..7eeed79 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c @@ -717,6 +717,16 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc wb_enc->wb_conn = NULL; } +static bool dpu_encoder_phys_wb_has_valid_fb(struct dpu_encoder_phys *phys_enc) +{ + struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); + + if (wb_enc->wb_job) + return true; + else + return false; +} + /** * dpu_encoder_phys_wb_init_ops - initialize writeback operations * @ops: Pointer to encoder operation table @@ -738,6 +748,8 @@ static void dpu_encoder_phys_wb_init_ops(struct dpu_encoder_phys_ops *ops) ops->prepare_wb_job = dpu_encoder_phys_wb_prepare_wb_job; ops->cleanup_wb_job = dpu_encoder_phys_wb_cleanup_wb_job; ops->irq_control = dpu_encoder_phys_wb_irq_ctrl; + ops->has_valid_output_fb = dpu_encoder_phys_wb_has_valid_fb; + } /** From patchwork Fri Feb 4 21:17:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Abhinav Kumar X-Patchwork-Id: 539915 Return-Path: X-Spam-Checker-Version: SpamAssassin 
3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A3BECC43217 for ; Fri, 4 Feb 2022 21:19:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244022AbiBDVS7 (ORCPT ); Fri, 4 Feb 2022 16:18:59 -0500 Received: from alexa-out.qualcomm.com ([129.46.98.28]:11351 "EHLO alexa-out.qualcomm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243847AbiBDVSH (ORCPT ); Fri, 4 Feb 2022 16:18:07 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1644009488; x=1675545488; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=iSjFnXW/40308DRR3ZLcydXqDDWvo3I9upgYQFfAdes=; b=ks5R/6w8zjNX5EoFIFVMkqQQ+c/tJnNkLXj3s+se9k+ojn652OvZ7Jhx SLcm7kPmzFFJA8bt9luP3nSyVIRL7TVF8Xfs7mP6Iqgg+92bcv8870TOB 9z6RfUm0UTLhJxqlR6/mbutSjHfC7DoTrOeG3Lxs1I+uuq4hZvBdKacSY 8=; Received: from ironmsg07-lv.qualcomm.com ([10.47.202.151]) by alexa-out.qualcomm.com with ESMTP; 04 Feb 2022 13:18:07 -0800 X-QCInternal: smtphost Received: from nasanex01c.na.qualcomm.com ([10.47.97.222]) by ironmsg07-lv.qualcomm.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Feb 2022 13:18:07 -0800 Received: from nalasex01a.na.qualcomm.com (10.47.209.196) by nasanex01c.na.qualcomm.com (10.47.97.222) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:18:06 -0800 Received: from abhinavk-linux.qualcomm.com (10.80.80.8) by nalasex01a.na.qualcomm.com (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.922.19; Fri, 4 Feb 2022 13:18:06 -0800 From: Abhinav Kumar To: CC: Abhinav Kumar , , , , , , , , , , , Subject: [PATCH 12/12] drm/msm/dpu: add writeback blocks to the display snapshot Date: Fri, 4 Feb 2022 13:17:25 -0800 Message-ID: <1644009445-17320-13-git-send-email-quic_abhinavk@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> References: <1644009445-17320-1-git-send-email-quic_abhinavk@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01a.na.qualcomm.com (10.47.209.196) Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add writeback block information while capturing the display snapshot. Signed-off-by: Abhinav Kumar Reviewed-by: Dmitry Baryshkov --- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index 6327ba9..e227b35 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -987,6 +987,11 @@ static void dpu_kms_mdp_snapshot(struct msm_disp_state *disp_state, struct msm_k msm_disp_snapshot_add_block(disp_state, cat->mixer[i].len, dpu_kms->mmio + cat->mixer[i].base, "lm_%d", i); + /* dump WB sub-blocks HW regs info */ + for (i = 0; i < cat->wb_count; i++) + msm_disp_snapshot_add_block(disp_state, cat->wb[i].len, + dpu_kms->mmio + cat->wb[i].base, "wb_%d", i); + msm_disp_snapshot_add_block(disp_state, top->hw.length, dpu_kms->mmio + top->hw.blk_off, "top");
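With the wb_%d blocks added to the dump, the captured writeback registers come out alongside the rest of the MDP state whenever the snapshot is triggered (for example on a writeback done timeout). Below is a minimal sketch for pulling that dump from userspace; it assumes the snapshot is surfaced through the kernel's devcoredump facility, as the msm disp snapshot helpers normally report state, and the /sys/class/devcoredump path and reader program are illustrative only, not part of this patch.

#include <glob.h>
#include <stdio.h>

int main(void)
{
	glob_t g;
	size_t i;

	/* each pending coredump shows up as devcdN/data until it is read */
	if (glob("/sys/class/devcoredump/devcd*/data", 0, NULL, &g))
		return 1;

	for (i = 0; i < g.gl_pathc; i++) {
		FILE *f = fopen(g.gl_pathv[i], "r");
		char buf[4096];
		size_t n;

		if (!f)
			continue;
		/* the dump is plain text; forward it to stdout */
		while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
			fwrite(buf, 1, n, stdout);
		fclose(f);
	}
	globfree(&g);
	return 0;
}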