From patchwork Wed Nov 28 17:29:06 2018
X-Patchwork-Submitter: Amit Pundir
X-Patchwork-Id: 152334
From: Amit Pundir
To: Greg KH
Cc: Stable, Subhash Jadavani, "Martin K. Petersen"
Subject: [PATCH for-4.4.y 07/10] scsi: ufs: fix race between clock gating and devfreq scaling work
Date: Wed, 28 Nov 2018 22:59:06 +0530
Message-Id: <1543426149-7269-8-git-send-email-amit.pundir@linaro.org>
In-Reply-To: <1543426149-7269-1-git-send-email-amit.pundir@linaro.org>
References: <1543426149-7269-1-git-send-email-amit.pundir@linaro.org>

From: Subhash Jadavani

commit 30fc33f1ef475480dc5bea4fe1bda84b003b992c upstream.

UFS devfreq clock scaling work may require clocks to be ON if it needs
to execute some UFS commands, hence it may request a clock hold before
issuing the command. But if the UFS clock gating work is already running
in parallel, the ungate work would end up waiting for the clock gating
work to finish, and as the clock gating work would also wait for the
clock scaling work to finish, we would enter a deadlock state.
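To make the shape of the problem concrete outside the kernel, here is a
minimal userspace sketch of the circular wait (a hypothetical pthread
analogue, not driver code: the real traces below involve three work
items, this collapses them to the essential two-party cycle, with
devfreq_lock standing in for devfreq's internal mutex and pthread_join()
for flush_work()):

/*
 * The "scaling" thread holds the devfreq lock and then waits for the
 * "gating" work to finish, while the gating work blocks on that lock.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t devfreq_lock = PTHREAD_MUTEX_INITIALIZER;

static void *gate_work(void *arg)
{
	/* like ufshcd_gate_work() -> devfreq_suspend_device(): needs the
	 * devfreq lock, which the scaling thread already holds */
	pthread_mutex_lock(&devfreq_lock);
	pthread_mutex_unlock(&devfreq_lock);
	return NULL;
}

int main(void)
{
	pthread_t gate;

	/* scaling work: devfreq_monitor() runs with the devfreq lock held */
	pthread_mutex_lock(&devfreq_lock);

	/* the gating work starts concurrently and blocks in gate_work() */
	pthread_create(&gate, NULL, gate_work, NULL);

	/* like waiting on flush_work() from ufshcd_hold(): never returns,
	 * since gate_work() can only finish after the lock is released */
	pthread_join(gate, NULL);

	pthread_mutex_unlock(&devfreq_lock);
	printf("never reached\n");
	return 0;
}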
Here are the call traces during this deadlock state:

Workqueue: devfreq_wq devfreq_monitor
__switch_to
__schedule
schedule
schedule_timeout
wait_for_common
wait_for_completion
flush_work
ufshcd_hold
ufshcd_send_uic_cmd
ufshcd_dme_get_attr
ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div
ufs_qcom_clk_scale_notify
ufshcd_scale_clks
ufshcd_devfreq_target
update_devfreq
devfreq_monitor
process_one_work
worker_thread
kthread
ret_from_fork

Workqueue: events ufshcd_gate_work
__switch_to
__schedule
schedule
schedule_preempt_disabled
__mutex_lock_slowpath
mutex_lock
devfreq_monitor_suspend
devfreq_simple_ondemand_handler
devfreq_suspend_device
ufshcd_gate_work
process_one_work
worker_thread
kthread
ret_from_fork

Workqueue: events ufshcd_ungate_work
__switch_to
__schedule
schedule
schedule_timeout
wait_for_common
wait_for_completion
flush_work
__cancel_work_timer
cancel_delayed_work_sync
ufshcd_ungate_work
process_one_work
worker_thread
kthread
ret_from_fork

This change fixes the deadlock by doing the following in the devfreq
work (devfreq_wq): try to cancel the clock gating work. If the gating
work is cancelled, or was never scheduled, hold the clock reference
count while the scaling work is in progress. If the gating work is
already running in parallel, skip the frequency scaling this time; it
will be retried once the next scaling window expires.

Reviewed-by: Sahitya Tummala
Signed-off-by: Subhash Jadavani
Signed-off-by: Martin K. Petersen
Signed-off-by: Amit Pundir
---
 drivers/scsi/ufs/ufshcd.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

--
2.7.4

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index e4c940981eef..7e6ba17d61f8 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -5511,15 +5511,47 @@ static int ufshcd_devfreq_target(struct device *dev,
 				unsigned long *freq, u32 flags)
 {
 	int err = 0;
 	struct ufs_hba *hba = dev_get_drvdata(dev);
+	bool release_clk_hold = false;
+	unsigned long irq_flags;
 
 	if (!ufshcd_is_clkscaling_enabled(hba))
 		return -EINVAL;
 
+	spin_lock_irqsave(hba->host->host_lock, irq_flags);
+	if (ufshcd_eh_in_progress(hba)) {
+		spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+		return 0;
+	}
+
+	if (ufshcd_is_clkgating_allowed(hba) &&
+	    (hba->clk_gating.state != CLKS_ON)) {
+		if (cancel_delayed_work(&hba->clk_gating.gate_work)) {
+			/* hold the vote until the scaling work is completed */
+			hba->clk_gating.active_reqs++;
+			release_clk_hold = true;
+			hba->clk_gating.state = CLKS_ON;
+		} else {
+			/*
+			 * Clock gating work seems to be running in parallel
+			 * hence skip scaling work to avoid deadlock between
+			 * current scaling work and gating work.
+			 */
+			spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+			return 0;
+		}
+	}
+	spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+
 	if (*freq == UINT_MAX)
 		err = ufshcd_scale_clks(hba, true);
 	else if (*freq == 0)
 		err = ufshcd_scale_clks(hba, false);
 
+	spin_lock_irqsave(hba->host->host_lock, irq_flags);
+	if (release_clk_hold)
+		__ufshcd_release(hba);
+	spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+
 	return err;
 }
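As a standalone illustration of the cancel-or-skip pattern the hunk
above implements, here is a hedged userspace sketch (all names are
hypothetical; the driver does this under hba->host->host_lock with
cancel_delayed_work() and __ufshcd_release()):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static bool clks_on = true;	/* mirrors clk_gating.state == CLKS_ON */
static bool gate_pending;	/* gating work queued, not yet running */
static int active_reqs;		/* clock hold reference count */

/*
 * Returns true if scaling may proceed (*release_hold tells the caller
 * whether to drop the hold afterwards); false means the gating work is
 * mid-flight, so skip this scaling window and retry on the next one.
 */
static bool try_hold_for_scaling(bool *release_hold)
{
	bool ok = true;

	*release_hold = false;
	pthread_mutex_lock(&state_lock);
	if (!clks_on) {
		if (gate_pending) {
			/* analogous to cancel_delayed_work() succeeding */
			gate_pending = false;
			active_reqs++;	/* hold until scaling completes */
			*release_hold = true;
			clks_on = true;
		} else {
			/* gating work already running: back off */
			ok = false;
		}
	}
	pthread_mutex_unlock(&state_lock);
	return ok;
}

static void release_after_scaling(void)
{
	pthread_mutex_lock(&state_lock);
	active_reqs--;		/* like __ufshcd_release() */
	pthread_mutex_unlock(&state_lock);
}

The design point preserved from the patch is that the check and the
reference-count bump happen atomically under a single lock, so the
gating work can never slip in between the decision to scale and the
hold that keeps the clocks on.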