From patchwork Sun Apr 12 14:50:01 2020
X-Patchwork-Submitter: Maulik Shah
X-Patchwork-Id: 189635
From: Maulik Shah <mkshah@codeaurora.org>
To: swboyd@chromium.org, evgreen@chromium.org, dianders@chromium.org,
    bjorn.andersson@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org,
    agross@kernel.org, mka@chromium.org, rnayak@codeaurora.org,
    ilina@codeaurora.org, lsrao@codeaurora.org, Maulik Shah
Subject: [PATCH v17 3/6] soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data
Date: Sun, 12 Apr 2020 20:20:01 +0530
Message-Id: <1586703004-13674-4-git-send-email-mkshah@codeaurora.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1586703004-13674-1-git-send-email-mkshah@codeaurora.org>
References: <1586703004-13674-1-git-send-email-mkshah@codeaurora.org>

TCSes may still contain previously programmed data when rpmh_flush() is
called, which can cause the old data to trigger along with the newly
flushed data. Fix this by cleaning the SLEEP and WAKE TCSes before the
new data is flushed.

With this there is no need to invoke rpmh_rsc_invalidate() from
rpmh_invalidate(). Simplify rpmh_invalidate() by moving the contents of
invalidate_batch() into it.

Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
Signed-off-by: Maulik Shah
Reviewed-by: Douglas Anderson
Reviewed-by: Stephen Boyd
---
 drivers/soc/qcom/rpmh.c | 41 ++++++++++++++++++-----------------------
 1 file changed, 18 insertions(+), 23 deletions(-)

diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index 03630ae..a75f3df 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -317,19 +317,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
         return ret;
 }
 
-static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
-{
-        struct batch_cache_req *req, *tmp;
-        unsigned long flags;
-
-        spin_lock_irqsave(&ctrlr->cache_lock, flags);
-        list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
-                kfree(req);
-        INIT_LIST_HEAD(&ctrlr->batch_cache);
-        ctrlr->dirty = true;
-        spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
-}
-
 /**
  * rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
  * batch to finish.
@@ -467,6 +454,13 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
                 return 0;
         }
 
+        /* Invalidate the TCSes first to avoid stale data */
+        do {
+                ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
+        } while (ret == -EAGAIN);
+        if (ret)
+                return ret;
+
         /* First flush the cached batch requests */
         ret = flush_batch(ctrlr);
         if (ret)
@@ -498,24 +492,25 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 }
 
 /**
- * rpmh_invalidate: Invalidate all sleep and active sets
- * sets.
+ * rpmh_invalidate: Invalidate sleep and wake sets in batch_cache
  *
  * @dev: The device making the request
  *
- * Invalidate the sleep and active values in the TCS blocks.
+ * Invalidate the sleep and wake values in batch_cache.
  */
 int rpmh_invalidate(const struct device *dev)
 {
         struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
-        int ret;
-
-        invalidate_batch(ctrlr);
+        struct batch_cache_req *req, *tmp;
+        unsigned long flags;
 
-        do {
-                ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
-        } while (ret == -EAGAIN);
+        spin_lock_irqsave(&ctrlr->cache_lock, flags);
+        list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
+                kfree(req);
+        INIT_LIST_HEAD(&ctrlr->batch_cache);
+        ctrlr->dirty = true;
+        spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 
-        return ret;
+        return 0;
 }
 EXPORT_SYMBOL(rpmh_invalidate);
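
For illustration only, below is a standalone, compilable sketch of the
ordering this change enforces in rpmh_flush(): keep retrying the TCS
invalidation while it returns -EAGAIN, and only flush the new SLEEP/WAKE
data once invalidation succeeds. fake_invalidate(), fake_flush() and
busy_cycles are hypothetical stand-ins for rpmh_rsc_invalidate() and the
driver's flush path; they are not part of the patch.

/*
 * Standalone illustration (not driver code) of the invalidate-then-flush
 * ordering: retry while the controller reports -EAGAIN, then flush.
 * fake_invalidate(), fake_flush() and busy_cycles are hypothetical
 * stand-ins for rpmh_rsc_invalidate() and the driver's flush path.
 */
#include <errno.h>
#include <stdio.h>

static int busy_cycles = 2;     /* pretend the controller is busy twice */

static int fake_invalidate(void)
{
        /* Mimic a busy controller: -EAGAIN while TCSes are still in use */
        return busy_cycles-- > 0 ? -EAGAIN : 0;
}

static int fake_flush(void)
{
        printf("flushing new sleep/wake data\n");
        return 0;
}

int main(void)
{
        int ret;

        /* Invalidate first so stale TCS contents cannot trigger later */
        do {
                ret = fake_invalidate();
        } while (ret == -EAGAIN);
        if (ret)
                return 1;

        return fake_flush() ? 1 : 0;
}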