From patchwork Wed Nov 28 14:40:03 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Amit Pundir
X-Patchwork-Id: 152298
From: Amit Pundir
To: Greg KH
Cc: Stable, Subhash Jadavani, "Martin K. Petersen"
Subject: [PATCH for-4.9.y 09/10] scsi: ufs: fix race between clock gating and devfreq scaling work
Date: Wed, 28 Nov 2018 20:10:03 +0530
Message-Id: <1543416004-1547-10-git-send-email-amit.pundir@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1543416004-1547-1-git-send-email-amit.pundir@linaro.org>
References: <1543416004-1547-1-git-send-email-amit.pundir@linaro.org>
X-Mailing-List: stable@vger.kernel.org

From: Subhash Jadavani

commit 30fc33f1ef475480dc5bea4fe1bda84b003b992c upstream.

UFS devfreq clock scaling work may require the clocks to be ON if it
needs to execute UFS commands, hence it may request a clock hold before
issuing a command. But if the UFS clock gating work is already running
in parallel, the ungate work ends up waiting for the gating work to
finish, and since the gating work in turn waits for the scaling work to
finish, we end up in a deadlock.
Here is the call trace during this deadlock state:

Workqueue: devfreq_wq devfreq_monitor
__switch_to
__schedule
schedule
schedule_timeout
wait_for_common
wait_for_completion
flush_work
ufshcd_hold
ufshcd_send_uic_cmd
ufshcd_dme_get_attr
ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div
ufs_qcom_clk_scale_notify
ufshcd_scale_clks
ufshcd_devfreq_target
update_devfreq
devfreq_monitor
process_one_work
worker_thread
kthread
ret_from_fork

Workqueue: events ufshcd_gate_work
__switch_to
__schedule
schedule
schedule_preempt_disabled
__mutex_lock_slowpath
mutex_lock
devfreq_monitor_suspend
devfreq_simple_ondemand_handler
devfreq_suspend_device
ufshcd_gate_work
process_one_work
worker_thread
kthread
ret_from_fork

Workqueue: events ufshcd_ungate_work
__switch_to
__schedule
schedule
schedule_timeout
wait_for_common
wait_for_completion
flush_work
__cancel_work_timer
cancel_delayed_work_sync
ufshcd_ungate_work
process_one_work
worker_thread
kthread
ret_from_fork

This change fixes the deadlock by doing the following in the devfreq
work (devfreq_wq): try to cancel the clock gating work. If the gating
work can be cancelled, or it wasn't scheduled, hold the clock reference
count while the scaling is in progress. If the gating work is already
running in parallel, skip the frequency scaling this time; it will be
retried once the next scaling window expires.

Reviewed-by: Sahitya Tummala
Signed-off-by: Subhash Jadavani
Signed-off-by: Martin K. Petersen
Signed-off-by: Amit Pundir
---
 drivers/scsi/ufs/ufshcd.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

-- 
2.7.4

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 6130e10145b5..39732e93d460 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -6633,15 +6633,47 @@ static int ufshcd_devfreq_target(struct device *dev,
 {
 	int err = 0;
 	struct ufs_hba *hba = dev_get_drvdata(dev);
+	bool release_clk_hold = false;
+	unsigned long irq_flags;
 
 	if (!ufshcd_is_clkscaling_enabled(hba))
 		return -EINVAL;
 
+	spin_lock_irqsave(hba->host->host_lock, irq_flags);
+	if (ufshcd_eh_in_progress(hba)) {
+		spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+		return 0;
+	}
+
+	if (ufshcd_is_clkgating_allowed(hba) &&
+	    (hba->clk_gating.state != CLKS_ON)) {
+		if (cancel_delayed_work(&hba->clk_gating.gate_work)) {
+			/* hold the vote until the scaling work is completed */
+			hba->clk_gating.active_reqs++;
+			release_clk_hold = true;
+			hba->clk_gating.state = CLKS_ON;
+		} else {
+			/*
+			 * Clock gating work seems to be running in parallel
+			 * hence skip scaling work to avoid deadlock between
+			 * current scaling work and gating work.
+			 */
+			spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+			return 0;
+		}
+	}
+	spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+
 	if (*freq == UINT_MAX)
 		err = ufshcd_scale_clks(hba, true);
 	else if (*freq == 0)
 		err = ufshcd_scale_clks(hba, false);
 
+	spin_lock_irqsave(hba->host->host_lock, irq_flags);
+	if (release_clk_hold)
+		__ufshcd_release(hba);
+	spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+
 	return err;
 }
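
For readers following the locking scheme rather than the driver itself,
below is a minimal user-space C sketch (not kernel code, and not part of
the patch) of the cancel-or-skip idea used above: before scaling, try to
cancel the pending gate work under the lock; on success take a reference
so the clocks stay on for the duration of the scaling; if the gate work
is already running, skip this scaling round instead of blocking on it.
All names here (gate_pending, gate_running, active_reqs, try_scale) are
made up for illustration, and a pthread mutex stands in for the host
spinlock.

#include <stdbool.h>
#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool gate_pending;  /* gate work queued but not yet started */
static bool gate_running;  /* gate work currently executing */
static int active_reqs;    /* votes that keep the clocks on */

/* Rough analogue of cancel_delayed_work(): succeeds unless the work
 * has already started running. Caller must hold 'lock'. */
static bool try_cancel_gate_work(void)
{
	if (gate_running)
		return false;
	gate_pending = false;
	return true;
}

/* Rough analogue of the patched ufshcd_devfreq_target(): returns false
 * when scaling is skipped because gate work is already in flight. */
static bool try_scale(bool scale_up)
{
	bool release_hold = false;

	pthread_mutex_lock(&lock);
	if (try_cancel_gate_work()) {
		active_reqs++;          /* hold the clocks while scaling */
		release_hold = true;
	} else {
		pthread_mutex_unlock(&lock);
		return false;           /* retry on the next devfreq window */
	}
	pthread_mutex_unlock(&lock);

	printf("scaling clocks %s\n", scale_up ? "up" : "down");

	pthread_mutex_lock(&lock);
	if (release_hold)
		active_reqs--;          /* drop the vote; gating may resume */
	pthread_mutex_unlock(&lock);
	return true;
}

int main(void)
{
	gate_pending = true;    /* queued but not running: cancel wins, scaling runs */
	printf("scaled: %d\n", try_scale(true));

	gate_running = true;    /* already running: scaling is skipped */
	printf("scaled: %d\n", try_scale(false));
	return 0;
}

Build with something like gcc -pthread. The point is only the control
flow: the scaling path never waits on the gate work, which is exactly
the wait that closed the cycle in the traces above.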