From patchwork Fri Jan 27 10:40:53 2023
X-Patchwork-Submitter: Abel Vesa
X-Patchwork-Id: 648091
From: Abel Vesa
To: "Rafael J. Wysocki", Kevin Hilman, Ulf Hansson, Len Brown, Pavel Machek,
 Greg Kroah-Hartman
Cc: Bjorn Andersson, Andy Gross, Konrad Dybcio, linux-pm@vger.kernel.org,
 Linux Kernel Mailing List, linux-arm-msm@vger.kernel.org, Dmitry Baryshkov,
 Stephen Boyd
Subject: [RFC PATCH v2 1/2] PM: domains: Skip disabling unused domains if provider has sync_state
Date: Fri, 27 Jan 2023 12:40:53 +0200
Message-Id: <20230127104054.895129-1-abel.vesa@linaro.org>
X-Mailer: git-send-email 2.34.1
X-Mailing-List: linux-arm-msm@vger.kernel.org

Currently, there are cases where a power domain needs to remain enabled
until its consumer driver probes, and such consumer drivers may be built
as modules. Since genpd_power_off_unused() runs too early for those
consumer modules to have had a chance to probe, the domain, being unused
at that point, gets powered off.

The best time to power off an unused domain is from its provider's
sync_state callback. So, if the provider has registered a sync_state
callback, assume its unused domains will be powered off from that
callback instead and skip them here. Also provide a generic sync_state
callback which powers off all the unused domains belonging to the
provider that registers it.

Signed-off-by: Abel Vesa
---
This approach has already been applied to unused clocks as well.

With this patch merged in, all providers that have a sync_state callback
registered will leave their domains enabled unless the provider's
sync_state callback explicitly disables them, so those providers will
need to add the disabling step to their sync_state callback.

On the other hand, platforms where domains need to remain enabled (even
if unused) until the consumer driver probes will be able, with this
patch in, to run without the pd_ignore_unused kernel argument, which at
this moment seems to be the case for most Qualcomm platforms.

The v1 is here:
https://lore.kernel.org/all/20230126234013.3638425-1-abel.vesa@linaro.org/

Changes since v1:
 * Added a generic sync_state callback that providers can register in
   order to power off their unused domains on sync_state. Also mentioned
   this in the commit message.
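For illustration, a provider with no other sync_state work to do could
simply point its driver's sync_state hook at the generic helper. This is
only a sketch, not part of the patch; the "foo" driver name and the probe
body are hypothetical:

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_domain.h>

/*
 * Hypothetical genpd provider that relies entirely on the generic
 * helper: once all of its consumers have probed, the driver core calls
 * sync_state and any still-unused domains of this provider get queued
 * for power-off.
 */
static int foo_pd_probe(struct platform_device *pdev)
{
        /* ... set up and register domains, e.g. via of_genpd_add_provider_onecell() ... */
        return 0;
}

static struct platform_driver foo_pd_driver = {
        .probe = foo_pd_probe,
        .driver = {
                .name = "foo-powerdomain",
                .sync_state = genpd_power_off_unused_sync_state,
        },
};
module_platform_driver(foo_pd_driver);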
 drivers/base/power/domain.c | 17 ++++++++++++++++-
 include/linux/pm_domain.h   |  3 +++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 84662d338188..c2a5f77c01f3 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -1099,7 +1099,8 @@ static int __init genpd_power_off_unused(void)
 	mutex_lock(&gpd_list_lock);
 
 	list_for_each_entry(genpd, &gpd_list, gpd_list_node)
-		genpd_queue_power_off_work(genpd);
+		if (!dev_has_sync_state(genpd->provider->dev))
+			genpd_queue_power_off_work(genpd);
 
 	mutex_unlock(&gpd_list_lock);
 
@@ -1107,6 +1108,20 @@ static int __init genpd_power_off_unused(void)
 }
 late_initcall(genpd_power_off_unused);
 
+void genpd_power_off_unused_sync_state(struct device *dev)
+{
+	struct generic_pm_domain *genpd;
+
+	mutex_lock(&gpd_list_lock);
+
+	list_for_each_entry(genpd, &gpd_list, gpd_list_node)
+		if (genpd->provider->dev == dev)
+			genpd_queue_power_off_work(genpd);
+
+	mutex_unlock(&gpd_list_lock);
+}
+EXPORT_SYMBOL_GPL(genpd_power_off_unused_sync_state);
+
 #ifdef CONFIG_PM_SLEEP
 
 /**
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index f776fb93eaa0..1fd5aa500c81 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -351,6 +351,7 @@ struct device *genpd_dev_pm_attach_by_id(struct device *dev,
 					  unsigned int index);
 struct device *genpd_dev_pm_attach_by_name(struct device *dev,
 					   const char *name);
+void genpd_power_off_unused_sync_state(struct device *dev);
 #else /* !CONFIG_PM_GENERIC_DOMAINS_OF */
 static inline int of_genpd_add_provider_simple(struct device_node *np,
 					       struct generic_pm_domain *genpd)
@@ -419,6 +420,8 @@ struct generic_pm_domain *of_genpd_remove_last(struct device_node *np)
 {
 	return ERR_PTR(-EOPNOTSUPP);
 }
+
+static inline void genpd_power_off_unused_sync_state(struct device *dev) {}
 #endif /* CONFIG_PM_GENERIC_DOMAINS_OF */
 
 #ifdef CONFIG_PM
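As a follow-up to the "those providers will need to add the disabling
part" note above: a provider that already has its own sync_state
callback would keep its existing logic and call the new helper at the
end. Again only a sketch; foo_pd_drop_bootup_votes() is a hypothetical
provider-specific helper, not an existing API:

/* Hypothetical provider-specific sync_state callback. */
static void foo_pd_sync_state(struct device *dev)
{
        /* provider-specific work, e.g. dropping boot-time votes (hypothetical helper) */
        foo_pd_drop_bootup_votes(dev);

        /* then let genpd queue power-off for whatever is still unused */
        genpd_power_off_unused_sync_state(dev);
}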