From patchwork Thu Jan 12 17:17:43 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ulf Hansson
X-Patchwork-Id: 91195
Delivered-To: patches@linaro.org
From: Ulf Hansson
To: "Rafael J. Wysocki", Ulf Hansson, linux-pm@vger.kernel.org
Cc: Len Brown, Pavel Machek, Kevin Hilman, Geert Uytterhoeven,
	Lina Iyer, Jon Hunter, Marek Szyprowski, Brian Norris
Subject: [RESEND PATCH 2/2] PM / Domains: Fix asynchronous execution of *noirq() callbacks
Date: Thu, 12 Jan 2017 18:17:43 +0100
Message-Id: <1484241463-28435-3-git-send-email-ulf.hansson@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1484241463-28435-1-git-send-email-ulf.hansson@linaro.org>
References: <1484241463-28435-1-git-send-email-ulf.hansson@linaro.org>

As the PM core may invoke the *noirq() callbacks asynchronously, the
current lock-less approach in genpd doesn't work. The consequence is
that we may find concurrent operations racing to power on/off the PM
domain.

As of now, no immediate errors have been reported, but it's probably
only a matter of time. Therefore, let's fix the problem now, before it
becomes a real issue, by deploying the locking scheme to the relevant
functions.
Reported-by: Brian Norris
Signed-off-by: Ulf Hansson
---
 drivers/base/power/domain.c | 62 ++++++++++++++++++++++++---------------------
 1 file changed, 33 insertions(+), 29 deletions(-)

-- 
1.9.1

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index fd2e3e1..6b23d82 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -729,16 +729,17 @@ static bool genpd_dev_active_wakeup(struct generic_pm_domain *genpd,
 /**
  * genpd_sync_power_off - Synchronously power off a PM domain and its masters.
  * @genpd: PM domain to power off, if possible.
+ * @depth: nesting count for lockdep.
  *
  * Check if the given PM domain can be powered off (during system suspend or
  * hibernation) and do that if so. Also, in that case propagate to its masters.
  *
  * This function is only called in "noirq" and "syscore" stages of system power
- * transitions, so it need not acquire locks (all of the "noirq" callbacks are
- * executed sequentially, so it is guaranteed that it will never run twice in
- * parallel).
+ * transitions, but since the "noirq" callbacks may be executed asynchronously,
+ * the lock must be held.
  */
-static void genpd_sync_power_off(struct generic_pm_domain *genpd)
+static void genpd_sync_power_off(struct generic_pm_domain *genpd,
+				 unsigned int depth)
 {
 	struct gpd_link *link;
 
@@ -757,20 +758,24 @@ static void genpd_sync_power_off(struct generic_pm_domain *genpd)
 	list_for_each_entry(link, &genpd->slave_links, slave_node) {
 		genpd_sd_counter_dec(link->master);
-		genpd_sync_power_off(link->master);
+
+		genpd_lock_nested(link->master, depth + 1);
+		genpd_sync_power_off(link->master, depth + 1);
+		genpd_unlock(link->master);
 	}
 }
 
 /**
  * genpd_sync_power_on - Synchronously power on a PM domain and its masters.
  * @genpd: PM domain to power on.
+ * @depth: nesting count for lockdep.
  *
  * This function is only called in "noirq" and "syscore" stages of system power
- * transitions, so it need not acquire locks (all of the "noirq" callbacks are
- * executed sequentially, so it is guaranteed that it will never run twice in
- * parallel).
+ * transitions, but since the "noirq" callbacks may be executed asynchronously,
+ * the lock must be held.
  */
-static void genpd_sync_power_on(struct generic_pm_domain *genpd)
+static void genpd_sync_power_on(struct generic_pm_domain *genpd,
+				unsigned int depth)
 {
 	struct gpd_link *link;
 
@@ -778,8 +783,11 @@ static void genpd_sync_power_on(struct generic_pm_domain *genpd)
 		return;
 
 	list_for_each_entry(link, &genpd->slave_links, slave_node) {
-		genpd_sync_power_on(link->master);
 		genpd_sd_counter_inc(link->master);
+
+		genpd_lock_nested(link->master, depth + 1);
+		genpd_sync_power_on(link->master, depth + 1);
+		genpd_unlock(link->master);
 	}
 
 	_genpd_power_on(genpd, false);
@@ -888,13 +896,10 @@ static int pm_genpd_suspend_noirq(struct device *dev)
 			return ret;
 	}
 
-	/*
-	 * Since all of the "noirq" callbacks are executed sequentially, it is
-	 * guaranteed that this function will never run twice in parallel for
-	 * the same PM domain, so it is not necessary to use locking here.
-	 */
+	genpd_lock(genpd);
 	genpd->suspended_count++;
-	genpd_sync_power_off(genpd);
+	genpd_sync_power_off(genpd, 0);
+	genpd_unlock(genpd);
 
 	return 0;
 }
@@ -919,13 +924,10 @@ static int pm_genpd_resume_noirq(struct device *dev)
 	if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
 		return 0;
 
-	/*
-	 * Since all of the "noirq" callbacks are executed sequentially, it is
-	 * guaranteed that this function will never run twice in parallel for
-	 * the same PM domain, so it is not necessary to use locking here.
-	 */
-	genpd_sync_power_on(genpd);
+	genpd_lock(genpd);
+	genpd_sync_power_on(genpd, 0);
 	genpd->suspended_count--;
+	genpd_unlock(genpd);
 
 	if (genpd->dev_ops.stop && genpd->dev_ops.start)
 		ret = pm_runtime_force_resume(dev);
@@ -1002,13 +1004,10 @@ static int pm_genpd_restore_noirq(struct device *dev)
 		return -EINVAL;
 
 	/*
-	 * Since all of the "noirq" callbacks are executed sequentially, it is
-	 * guaranteed that this function will never run twice in parallel for
-	 * the same PM domain, so it is not necessary to use locking here.
-	 *
 	 * At this point suspended_count == 0 means we are being run for the
 	 * first time for the given domain in the present cycle.
 	 */
+	genpd_lock(genpd);
 	if (genpd->suspended_count++ == 0)
 		/*
 		 * The boot kernel might put the domain into arbitrary state,
 		 */
 		genpd->status = GPD_STATE_POWER_OFF;
 
-	genpd_sync_power_on(genpd);
+	genpd_sync_power_on(genpd, 0);
+	genpd_unlock(genpd);
 
 	if (genpd->dev_ops.stop && genpd->dev_ops.start)
 		ret = pm_runtime_force_resume(dev);
@@ -1070,13 +1070,17 @@ static void genpd_syscore_switch(struct device *dev, bool suspend)
 	if (!pm_genpd_present(genpd))
 		return;
 
+	genpd_lock(genpd);
+
 	if (suspend) {
 		genpd->suspended_count++;
-		genpd_sync_power_off(genpd);
+		genpd_sync_power_off(genpd, 0);
 	} else {
-		genpd_sync_power_on(genpd);
+		genpd_sync_power_on(genpd, 0);
 		genpd->suspended_count--;
 	}
+
+	genpd_unlock(genpd);
 }
 
 void pm_genpd_syscore_poweroff(struct device *dev)