From patchwork Wed Nov 28 14:47:46 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Amit Pundir
X-Patchwork-Id: 152304
Delivered-To: patch@linaro.org
From: Amit Pundir <amit.pundir@linaro.org>
To: Greg KH
Cc: Stable <stable@vger.kernel.org>, Subhash Jadavani,
 "Martin K. Petersen"
Subject: [PATCH for-3.18.y 4/5] scsi: ufs: fix race between clock gating and
 devfreq scaling work
Date: Wed, 28 Nov 2018 20:17:46 +0530
Message-Id: <1543416467-2081-5-git-send-email-amit.pundir@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1543416467-2081-1-git-send-email-amit.pundir@linaro.org>
References: <1543416467-2081-1-git-send-email-amit.pundir@linaro.org>
Sender: stable-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Subhash Jadavani

commit 30fc33f1ef475480dc5bea4fe1bda84b003b992c upstream.

UFS devfreq clock scaling work may require the clocks to be ON if it needs
to execute some UFS commands, hence it may request a clock hold before
issuing the command. But if the UFS clock gating work is already running
in parallel, the ungate work would end up waiting for the clock gating
work to finish, and as the clock gating work would also wait for the
clock scaling work to finish, we would enter a deadlock state.
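Before the traces below, a minimal userspace C sketch of the same three-way
wait (a hypothetical analogue, not the kernel code): a pthread mutex stands
in for the devfreq lock, pthread_join stands in for
flush_work()/cancel_delayed_work_sync(), and the three threads play the
roles of the scaling, gating and ungating work items. Build with -pthread;
the program hangs by design:

/* deadlock_sketch.c: hypothetical userspace analogue of the UFS deadlock.
 * Build: cc -pthread deadlock_sketch.c -o deadlock_sketch
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t devfreq_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_t gate_thread, ungate_thread;

/* ufshcd_gate_work analogue: ends up in devfreq_suspend_device(),
 * which needs the devfreq lock held by the scaling work. */
static void *gate_work(void *arg)
{
	(void)arg;
	sleep(1);
	pthread_mutex_lock(&devfreq_lock);	/* blocks: scaler holds it */
	pthread_mutex_unlock(&devfreq_lock);
	return NULL;
}

/* ufshcd_ungate_work analogue: cancel_delayed_work_sync() waits for
 * the gate work to finish. */
static void *ungate_work(void *arg)
{
	(void)arg;
	pthread_join(gate_thread, NULL);	/* blocks: gate waits on lock */
	return NULL;
}

/* devfreq_monitor / scaling work analogue. */
int main(void)
{
	pthread_mutex_lock(&devfreq_lock);	/* held across update_devfreq() */
	pthread_create(&gate_thread, NULL, gate_work, NULL);
	pthread_create(&ungate_thread, NULL, ungate_work, NULL);
	printf("scaler waits for ungate, ungate for gate, gate for scaler...\n");
	pthread_join(ungate_thread, NULL);	/* ufshcd_hold(): flush_work() */
	pthread_mutex_unlock(&devfreq_lock);	/* never reached */
	return 0;
}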
Here are the call traces during this deadlock state:

Workqueue: devfreq_wq devfreq_monitor
 __switch_to
 __schedule
 schedule
 schedule_timeout
 wait_for_common
 wait_for_completion
 flush_work
 ufshcd_hold
 ufshcd_send_uic_cmd
 ufshcd_dme_get_attr
 ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div
 ufs_qcom_clk_scale_notify
 ufshcd_scale_clks
 ufshcd_devfreq_target
 update_devfreq
 devfreq_monitor
 process_one_work
 worker_thread
 kthread
 ret_from_fork

Workqueue: events ufshcd_gate_work
 __switch_to
 __schedule
 schedule
 schedule_preempt_disabled
 __mutex_lock_slowpath
 mutex_lock
 devfreq_monitor_suspend
 devfreq_simple_ondemand_handler
 devfreq_suspend_device
 ufshcd_gate_work
 process_one_work
 worker_thread
 kthread
 ret_from_fork

Workqueue: events ufshcd_ungate_work
 __switch_to
 __schedule
 schedule
 schedule_timeout
 wait_for_common
 wait_for_completion
 flush_work
 __cancel_work_timer
 cancel_delayed_work_sync
 ufshcd_ungate_work
 process_one_work
 worker_thread
 kthread
 ret_from_fork

This change fixes the deadlock by doing the following in the devfreq work
(devfreq_wq): try to cancel the clock gating work. If we are able to cancel
the gating work, or it wasn't scheduled, hold the clock reference count
until the scaling is in progress. If the gating work is already running in
parallel, skip the frequency scaling this time; it will be retried once
the next scaling window expires.

Reviewed-by: Sahitya Tummala
Signed-off-by: Subhash Jadavani
Signed-off-by: Martin K. Petersen
Signed-off-by: Amit Pundir
---
 drivers/scsi/ufs/ufshcd.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)
--
2.7.4

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 7b48fb84a900..e8f3033d991b 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -5406,15 +5406,47 @@ static int ufshcd_devfreq_target(struct device *dev,
 {
 	int err = 0;
 	struct ufs_hba *hba = dev_get_drvdata(dev);
+	bool release_clk_hold = false;
+	unsigned long irq_flags;
 
 	if (!ufshcd_is_clkscaling_enabled(hba))
 		return -EINVAL;
 
+	spin_lock_irqsave(hba->host->host_lock, irq_flags);
+	if (ufshcd_eh_in_progress(hba)) {
+		spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+		return 0;
+	}
+
+	if (ufshcd_is_clkgating_allowed(hba) &&
+	    (hba->clk_gating.state != CLKS_ON)) {
+		if (cancel_delayed_work(&hba->clk_gating.gate_work)) {
+			/* hold the vote until the scaling work is completed */
+			hba->clk_gating.active_reqs++;
+			release_clk_hold = true;
+			hba->clk_gating.state = CLKS_ON;
+		} else {
+			/*
+			 * Clock gating work seems to be running in parallel
+			 * hence skip scaling work to avoid deadlock between
+			 * current scaling work and gating work.
+			 */
+			spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+			return 0;
+		}
+	}
+	spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+
 	if (*freq == UINT_MAX)
 		err = ufshcd_scale_clks(hba, true);
 	else if (*freq == 0)
 		err = ufshcd_scale_clks(hba, false);
 
+	spin_lock_irqsave(hba->host->host_lock, irq_flags);
+	if (release_clk_hold)
+		__ufshcd_release(hba);
+	spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+
 	return err;
 }
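For illustration only, the pattern the hunk above implements (try to cancel
the pending gate work, take a clock hold on success, and skip the scaling
cycle entirely if the gate work is already running) can be sketched in
plain userspace C. Everything below (try_cancel_gate, the clk_gating
struct fields) is a hypothetical stand-in, not the driver's API:

/* fix_sketch.c: hypothetical userspace analogue of the fix's pattern. */
#include <stdbool.h>
#include <stdio.h>

struct clk_gating {
	bool gate_pending;	/* stands in for the queued delayed work */
	bool gate_running;	/* stands in for work currently executing */
	int  active_reqs;	/* vote count keeping clocks on */
};

/* cancel_delayed_work() analogue: succeeds only if not yet running. */
static bool try_cancel_gate(struct clk_gating *cg)
{
	if (cg->gate_running)
		return false;
	cg->gate_pending = false;
	return true;
}

static int scale_clocks(struct clk_gating *cg, bool up)
{
	bool release_hold = false;

	if (cg->gate_pending || cg->gate_running) {
		if (try_cancel_gate(cg)) {
			cg->active_reqs++;	/* hold the vote while scaling */
			release_hold = true;
		} else {
			/* gate work running: skip, retry next window */
			printf("gate work running, skip scaling this window\n");
			return 0;
		}
	}

	printf("scaling clocks %s\n", up ? "up" : "down");

	if (release_hold)
		cg->active_reqs--;	/* __ufshcd_release() analogue */
	return 0;
}

int main(void)
{
	struct clk_gating cg = { .gate_pending = true };

	scale_clocks(&cg, true);	/* cancels pending gate, then scales */
	cg.gate_running = true;
	scale_clocks(&cg, false);	/* gate running: skipped safely */
	return 0;
}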