From patchwork Thu Sep 22 13:16:16 2022
X-Patchwork-Submitter: Zhang Qilong
X-Patchwork-Id: 608302
From: Zhang Qilong
To: , , ,
CC: ,
Subject: [PATCH v2 -next 1/3] dmaengine: qcom: bam_dma: fix PM usage counter unbalance in bam_dma
Date: Thu, 22 Sep 2022 21:16:16 +0800
Message-ID: <20220922131616.104320-1-zhangqilong3@huawei.com>
X-Mailer: git-send-email 2.26.0.106.g9fadedd
X-Mailing-List: linux-arm-msm@vger.kernel.org

pm_runtime_get_sync() will increment the PM usage counter even if it
fails. Forgetting the matching put operation will result in a reference
leak here. We fix it by replacing the call with the newer
pm_runtime_resume_and_get() to keep the usage counter balanced.
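
For context (not part of the patch itself), here is a minimal sketch of
the two error-handling patterns involved; "my_dev" and the wrapper
functions are illustrative placeholders, not code taken from bam_dma:

#include <linux/pm_runtime.h>

/*
 * Illustrative sketch only -- "my_dev" is a placeholder device pointer.
 *
 * Old pattern: pm_runtime_get_sync() bumps the usage counter even when
 * it fails, so the error path must drop the reference explicitly.
 */
static int old_pattern(struct device *my_dev)
{
	int ret;

	ret = pm_runtime_get_sync(my_dev);
	if (ret < 0) {
		pm_runtime_put_noidle(my_dev);	/* easy to forget */
		return ret;
	}

	/* ... access the hardware ... */

	pm_runtime_put(my_dev);
	return 0;
}

/*
 * New pattern: pm_runtime_resume_and_get() drops the reference itself
 * when the resume fails, so a bare "return ret" is correct.
 */
static int new_pattern(struct device *my_dev)
{
	int ret;

	ret = pm_runtime_resume_and_get(my_dev);
	if (ret < 0)
		return ret;

	/* ... access the hardware ... */

	pm_runtime_put(my_dev);
	return 0;
}

Because pm_runtime_resume_and_get() releases the reference internally on
failure, the bare error returns in the hunks below become safe without
adding put calls to each error path.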
Fixes: 0ac9c3dd0d6fe ("dmaengine: qcom: bam_dma: fix runtime PM underflow")
Signed-off-by: Zhang Qilong
---
 drivers/dma/qcom/bam_dma.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
index 2ff787df513e..415ad0008f18 100644
--- a/drivers/dma/qcom/bam_dma.c
+++ b/drivers/dma/qcom/bam_dma.c
@@ -573,7 +573,7 @@ static void bam_free_chan(struct dma_chan *chan)
 	unsigned long flags;
 	int ret;
 
-	ret = pm_runtime_get_sync(bdev->dev);
+	ret = pm_runtime_resume_and_get(bdev->dev);
 	if (ret < 0)
 		return;
 
@@ -776,7 +776,7 @@ static int bam_pause(struct dma_chan *chan)
 	unsigned long flag;
 	int ret;
 
-	ret = pm_runtime_get_sync(bdev->dev);
+	ret = pm_runtime_resume_and_get(bdev->dev);
 	if (ret < 0)
 		return ret;
 
@@ -802,7 +802,7 @@ static int bam_resume(struct dma_chan *chan)
 	unsigned long flag;
 	int ret;
 
-	ret = pm_runtime_get_sync(bdev->dev);
+	ret = pm_runtime_resume_and_get(bdev->dev);
 	if (ret < 0)
 		return ret;
 
@@ -911,7 +911,7 @@ static irqreturn_t bam_dma_irq(int irq, void *data)
 	if (srcs & P_IRQ)
 		tasklet_schedule(&bdev->task);
 
-	ret = pm_runtime_get_sync(bdev->dev);
+	ret = pm_runtime_resume_and_get(bdev->dev);
 	if (ret < 0)
 		return IRQ_NONE;
 
@@ -1029,7 +1029,7 @@ static void bam_start_dma(struct bam_chan *bchan)
 	if (!vd)
 		return;
 
-	ret = pm_runtime_get_sync(bdev->dev);
+	ret = pm_runtime_resume_and_get(bdev->dev);
 	if (ret < 0)
 		return;