From patchwork Fri Mar 7 19:15:42 2025
X-Patchwork-Submitter: "Rafael J. Wysocki"
X-Patchwork-Id: 871799
Wysocki" To: Linux PM Cc: LKML , Lukasz Luba , Peter Zijlstra , Srinivas Pandruvada , Dietmar Eggemann , Morten Rasmussen , Vincent Guittot , Ricardo Neri , Pierre Gondois , Christian Loehle , Viresh Kumar Subject: [RFC][PATCH v0.3 1/6] cpufreq/sched: schedutil: Add helper for governor checks Date: Fri, 07 Mar 2025 20:15:42 +0100 Message-ID: <1840739.VLH7GnMWUR@rjwysocki.net> In-Reply-To: <22640172.EfDdHjke4D@rjwysocki.net> References: <22640172.EfDdHjke4D@rjwysocki.net> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-CLIENT-IP: 195.136.19.94 X-CLIENT-HOSTNAME: 195.136.19.94 X-VADE-SPAMSTATE: clean X-VADE-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgeefvddrtddtgdduudduheegucetufdoteggodetrfdotffvucfrrhhofhhilhgvmecujffqoffgrffnpdggtffipffknecuuegrihhlohhuthemucduhedtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenucfjughrpefhvfevufffkfgjfhgggfgtsehtufertddttdejnecuhfhrohhmpedftfgrfhgrvghlucflrdcuhgihshhotghkihdfuceorhhjfiesrhhjfiihshhotghkihdrnhgvtheqnecuggftrfgrthhtvghrnhepvdffueeitdfgvddtudegueejtdffteetgeefkeffvdeftddttdeuhfegfedvjefhnecukfhppeduleehrddufeeirdduledrleegnecuvehluhhsthgvrhfuihiivgepudenucfrrghrrghmpehinhgvthepudelhedrudefiedrudelrdelgedphhgvlhhopehkrhgvrggthhgvrhdrlhhotggrlhhnvghtpdhmrghilhhfrhhomheprhhjfiesrhhjfiihshhotghkihdrnhgvthdpnhgspghrtghpthhtohepuddvpdhrtghpthhtoheplhhinhhugidqphhmsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhinhhugidqkhgvrhhnvghlsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhukhgrshiirdhluhgsrgesrghrmhdrtghomhdprhgtphhtthhopehpvghtvghriiesihhnfhhrrgguvggrugdrohhrghdprhgtphhtthhopehsrhhinhhivhgrshdrphgrnhgurhhuvhgruggrsehlihhnuhigrdh X-DCC--Metrics: v370.home.net.pl 1024; Body=12 Fuz1=12 Fuz2=12 From: Rafael J. Wysocki Add a helper for checking if schedutil is the current governor for a given cpufreq policy and use it in sched_is_eas_possible() to avoid accessing cpufreq policy internals directly from there. No intentional functional impact. Signed-off-by: Rafael J. 
 include/linux/cpufreq.h          |    9 +++++++++
 kernel/sched/cpufreq_schedutil.c |    9 +++++++--
 kernel/sched/sched.h             |    2 --
 kernel/sched/topology.c          |    6 +++---
 4 files changed, 19 insertions(+), 7 deletions(-)

--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -641,6 +641,15 @@
 struct cpufreq_governor *cpufreq_default_governor(void);
 struct cpufreq_governor *cpufreq_fallback_governor(void);
 
+#ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL
+bool sugov_is_cpufreq_governor(struct cpufreq_policy *policy);
+#else
+static inline bool sugov_is_cpufreq_governor(struct cpufreq_policy *policy)
+{
+        return false;
+}
+#endif
+
 static inline void cpufreq_policy_apply_limits(struct cpufreq_policy *policy)
 {
         if (policy->max < policy->cur)
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -604,7 +604,7 @@
 
 /********************** cpufreq governor interface *********************/
 
-struct cpufreq_governor schedutil_gov;
+static struct cpufreq_governor schedutil_gov;
 
 static struct sugov_policy *sugov_policy_alloc(struct cpufreq_policy *policy)
 {
@@ -874,7 +874,7 @@
         sg_policy->limits_changed = true;
 }
 
-struct cpufreq_governor schedutil_gov = {
+static struct cpufreq_governor schedutil_gov = {
         .name           = "schedutil",
         .owner          = THIS_MODULE,
         .flags          = CPUFREQ_GOV_DYNAMIC_SWITCHING,
@@ -892,4 +892,9 @@
 }
 #endif
 
+bool sugov_is_cpufreq_governor(struct cpufreq_policy *policy)
+{
+        return policy->governor == &schedutil_gov;
+}
+
 cpufreq_governor_init(schedutil_gov);
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3552,8 +3552,6 @@
         return static_branch_unlikely(&sched_energy_present);
 }
 
-extern struct cpufreq_governor schedutil_gov;
-
 #else /* ! (CONFIG_ENERGY_MODEL && CONFIG_CPU_FREQ_GOV_SCHEDUTIL) */
 
 #define perf_domain_span(pd) NULL
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -216,7 +216,7 @@
 {
         bool any_asym_capacity = false;
         struct cpufreq_policy *policy;
-        struct cpufreq_governor *gov;
+        bool policy_is_ready;
         int i;
 
         /* EAS is enabled for asymmetric CPU capacity topologies. */
@@ -261,9 +261,9 @@
                 }
                 return false;
         }
-        gov = policy->governor;
+        policy_is_ready = sugov_is_cpufreq_governor(policy);
         cpufreq_cpu_put(policy);
-        if (gov != &schedutil_gov) {
+        if (!policy_is_ready) {
                 if (sched_debug()) {
                         pr_info("rd %*pbl: Checking EAS, schedutil is mandatory\n",
                                 cpumask_pr_args(cpu_mask));
Wysocki" X-Patchwork-Id: 871432 Received: from cloudserver094114.home.pl (cloudserver094114.home.pl [79.96.170.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B0027256C64; Fri, 7 Mar 2025 19:42:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=79.96.170.134 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741376577; cv=none; b=lmcWQ+ewlaSxxmjhZ8rKcjXGd2gGZDyvrhoEmPlpo4W4TsWKO4BPf8nFzUboLOf5KMxCA6y/7kJC51JMJPppv9jJGfbTDm3xLBNvpeHUnf61sMUcUthE2Bovaegg/i+SRuix5hvlmUBpfonAKvd6251PVci1gBO4QR2uvccNkpw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741376577; c=relaxed/simple; bh=25uNFJVAZnPmoWvltcPZLOgQK9aoLiwhfXgiuw3qxHM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=HUTfjyhpeR5om72LIYEVdE/B78ZqIgklfsB6dH5LCuPRBLymOK5XFNF2scn62DmgkyUAxG1hitYXhcY/fHCXGlGhLgWzE9t5oQSn25sQdKZg+lPerGG1v/qV9EHDH57Nz/R/LTSnXv00LNAVjUR9/R89udKP330uDxtt2/L5rXs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rjwysocki.net; spf=pass smtp.mailfrom=rjwysocki.net; dkim=pass (2048-bit key) header.d=rjwysocki.net header.i=@rjwysocki.net header.b=ELAXfuXQ; arc=none smtp.client-ip=79.96.170.134 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rjwysocki.net Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=rjwysocki.net Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=rjwysocki.net header.i=@rjwysocki.net header.b="ELAXfuXQ" Received: from localhost (127.0.0.1) (HELO v370.home.net.pl) by /usr/run/smtp (/usr/run/postfix/private/idea_relay_lmtp) via UNIX with SMTP (IdeaSmtpServer 6.3.1) id 018cce7891fe7508; Fri, 7 Mar 2025 20:42:44 +0100 Received: from kreacher.localnet (unknown [195.136.19.94]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by cloudserver094114.home.pl (Postfix) with ESMTPSA id 6AE0C9A0BFB; Fri, 7 Mar 2025 20:42:43 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=rjwysocki.net; s=dkim; t=1741376563; bh=25uNFJVAZnPmoWvltcPZLOgQK9aoLiwhfXgiuw3qxHM=; h=From:Subject:Date; b=ELAXfuXQgEVAXvvswdzovxljFBYapZLrai339vgE5is+r9PSZ/m3cf0JS13IQTYUN 32eIXsA2Q6Hac+8ZGuUgMeRCvviHQjhxQN95/dNZIKshTGb5OE3B5Nckrjldjn/J7Z R0fHeLxs485LG6lRTVrWLC3xQ861Ma+gGSNGXoPucWWmyDTYYsNgE2g0gwE9wbj38T xzR+reMU/UCEqGwRp9WgbritoN/xmPDAOjTrilzPzf01exML6LYFAWuqMvnuySmydb GA00K/18+ncJJh5wlOu0cNyqyxhXdXwjvtt3fujqCJuYWCmxXfpW83Qt0FqcdNPLwz VJ7V4SlJwACiw== From: "Rafael J. 
Wysocki" To: Linux PM Cc: LKML , Lukasz Luba , Peter Zijlstra , Srinivas Pandruvada , Dietmar Eggemann , Morten Rasmussen , Vincent Guittot , Ricardo Neri , Pierre Gondois , Christian Loehle , Viresh Kumar Subject: [RFC][PATCH v0.3 2/6] cpufreq/sched: Move cpufreq-specific EAS checks to cpufreq Date: Fri, 07 Mar 2025 20:16:03 +0100 Message-ID: <2038066.usQuhbGJ8B@rjwysocki.net> In-Reply-To: <22640172.EfDdHjke4D@rjwysocki.net> References: <22640172.EfDdHjke4D@rjwysocki.net> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-CLIENT-IP: 195.136.19.94 X-CLIENT-HOSTNAME: 195.136.19.94 X-VADE-SPAMSTATE: clean X-VADE-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgeefvddrtddtgdduudduheefucetufdoteggodetrfdotffvucfrrhhofhhilhgvmecujffqoffgrffnpdggtffipffknecuuegrihhlohhuthemucduhedtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenucfjughrpefhvfevufffkfgjfhgggfgtsehtufertddttdejnecuhfhrohhmpedftfgrfhgrvghlucflrdcuhgihshhotghkihdfuceorhhjfiesrhhjfiihshhotghkihdrnhgvtheqnecuggftrfgrthhtvghrnhepvdffueeitdfgvddtudegueejtdffteetgeefkeffvdeftddttdeuhfegfedvjefhnecukfhppeduleehrddufeeirdduledrleegnecuvehluhhsthgvrhfuihiivgepudenucfrrghrrghmpehinhgvthepudelhedrudefiedrudelrdelgedphhgvlhhopehkrhgvrggthhgvrhdrlhhotggrlhhnvghtpdhmrghilhhfrhhomheprhhjfiesrhhjfiihshhotghkihdrnhgvthdpnhgspghrtghpthhtohepuddvpdhrtghpthhtoheplhhinhhugidqphhmsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhinhhugidqkhgvrhhnvghlsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhukhgrshiirdhluhgsrgesrghrmhdrtghomhdprhgtphhtthhopehpvghtvghriiesihhnfhhrrgguvggrugdrohhrghdprhgtphhtthhopehsrhhinhhivhgrshdrphgrnhgurhhuvhgruggrsehlihhnuhigrdh X-DCC--Metrics: v370.home.net.pl 1024; Body=12 Fuz1=12 Fuz2=12 From: Rafael J. Wysocki Doing cpufreq-specific EAS checks that require accessing policy internals directly from sched_is_eas_possible() is a bit unfortunate, so introduce cpufreq_ready_for_eas() in cpufreq, move those checks into that new function and make sched_is_eas_possible() call it. Signed-off-by: Rafael J. Wysocki --- drivers/cpufreq/cpufreq.c | 30 ++++++++++++++++++++++++++++++ include/linux/cpufreq.h | 2 ++ kernel/sched/topology.c | 25 +++++-------------------- 3 files changed, 37 insertions(+), 20 deletions(-) --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -3052,6 +3052,36 @@ return 0; } + +bool cpufreq_ready_for_eas(const struct cpumask *cpu_mask) +{ + int i; + + /* Do not attempt EAS if schedutil is not being used. 
+        for_each_cpu(i, cpu_mask) {
+                struct cpufreq_policy *policy;
+                bool policy_is_ready;
+
+                policy = cpufreq_cpu_get(i);
+                if (!policy) {
+                        pr_debug("rd %*pbl: cpufreq policy not set for CPU: %d",
+                                 cpumask_pr_args(cpu_mask), i);
+
+                        return false;
+                }
+                policy_is_ready = sugov_is_cpufreq_governor(policy);
+                cpufreq_cpu_put(policy);
+                if (!policy_is_ready) {
+                        pr_debug("rd %*pbl: schedutil is mandatory for EAS\n",
+                                 cpumask_pr_args(cpu_mask));
+
+                        return false;
+                }
+        }
+
+        return true;
+}
+
 module_param(off, int, 0444);
 module_param_string(default_governor, default_governor, CPUFREQ_NAME_LEN, 0444);
 core_initcall(cpufreq_core_init);
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -1215,6 +1215,8 @@
                 struct cpufreq_frequency_table *table,
                 unsigned int transition_latency);
 
+bool cpufreq_ready_for_eas(const struct cpumask *cpu_mask);
+
 static inline void cpufreq_register_em_with_opp(struct cpufreq_policy *policy)
 {
         dev_pm_opp_of_register_em(get_cpu_device(policy->cpu),
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -215,8 +215,6 @@
 static bool sched_is_eas_possible(const struct cpumask *cpu_mask)
 {
         bool any_asym_capacity = false;
-        struct cpufreq_policy *policy;
-        bool policy_is_ready;
         int i;
 
         /* EAS is enabled for asymmetric CPU capacity topologies. */
@@ -251,25 +249,12 @@
                 return false;
         }
 
-        /* Do not attempt EAS if schedutil is not being used. */
-        for_each_cpu(i, cpu_mask) {
-                policy = cpufreq_cpu_get(i);
-                if (!policy) {
-                        if (sched_debug()) {
-                                pr_info("rd %*pbl: Checking EAS, cpufreq policy not set for CPU: %d",
-                                        cpumask_pr_args(cpu_mask), i);
-                        }
-                        return false;
-                }
-                policy_is_ready = sugov_is_cpufreq_governor(policy);
-                cpufreq_cpu_put(policy);
-                if (!policy_is_ready) {
-                        if (sched_debug()) {
-                                pr_info("rd %*pbl: Checking EAS, schedutil is mandatory\n",
-                                        cpumask_pr_args(cpu_mask));
-                        }
-                        return false;
+        if (!cpufreq_ready_for_eas(cpu_mask)) {
+                if (sched_debug()) {
+                        pr_info("rd %*pbl: Checking EAS: cpufreq is not ready",
+                                cpumask_pr_args(cpu_mask));
                 }
+                return false;
         }
 
         return true;
Wysocki" X-Patchwork-Id: 871433 Received: from cloudserver094114.home.pl (cloudserver094114.home.pl [79.96.170.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B7F07254AFD; Fri, 7 Mar 2025 19:42:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=79.96.170.134 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741376572; cv=none; b=H+c1iwKXbC7EOcVAngtCy5PXO5/SO2HSJ6AJAoLzA9BSwnL8uk1EV/tim4RDWrXOHaldbrzfL06JuDQe1Z3VloX2ACTQcol44/AwbQNAtxhyhJ2bG84uPjJ/hQMrxTcZi2+nYYymo4DUjcjFZcrGJlNS5iVkGLpM/Q7wHD8wB7g= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741376572; c=relaxed/simple; bh=Ccjz8W1h/ZSWzpBW6Pli5ZjJF81V8/l4pL/JvHejhUs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=e12jHhyrR5mCpqqtJ/I20SS3H6I79W6A1QPtq2g3Ok25btu2Vw7Y2QW9Zw/boimB4aMzaam81s4FPLRn5o4BG5vgcOEk16gSjYJxJyuiSIRft/0LtNbcursmG95mtk8503Ssl3aufe17LEGbS/P36W0k2T6PUMXm0wGUv1dbF90= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rjwysocki.net; spf=pass smtp.mailfrom=rjwysocki.net; dkim=pass (2048-bit key) header.d=rjwysocki.net header.i=@rjwysocki.net header.b=Qn79S/7u; arc=none smtp.client-ip=79.96.170.134 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rjwysocki.net Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=rjwysocki.net Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=rjwysocki.net header.i=@rjwysocki.net header.b="Qn79S/7u" Received: from localhost (127.0.0.1) (HELO v370.home.net.pl) by /usr/run/smtp (/usr/run/postfix/private/idea_relay_lmtp) via UNIX with SMTP (IdeaSmtpServer 6.3.1) id b980cc6cc07cbb7d; Fri, 7 Mar 2025 20:42:43 +0100 Received: from kreacher.localnet (unknown [195.136.19.94]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by cloudserver094114.home.pl (Postfix) with ESMTPSA id 7FED89A0BFB; Fri, 7 Mar 2025 20:42:42 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=rjwysocki.net; s=dkim; t=1741376563; bh=Ccjz8W1h/ZSWzpBW6Pli5ZjJF81V8/l4pL/JvHejhUs=; h=From:Subject:Date; b=Qn79S/7u59JDTpIdC0OknBcLi+8B89MF3VItl4EYFVt+LQW31c0+2Efd7mNAJkFlN 1LerF2Rb49tqk1+zgbSWHo2Hw97lsvXCOJC5PrxWOh9hZN5tFxM40KVNFeIYI3BE5/ 2vzJm83+TvpSsbkpg2/bYErFA0iGG5dJz0t9dRaNMcWAgG0RiYkgx+3xp3Ctqh9/yf sfQjJA+GzCYlwd8gPt59pK1xaEncHrE32WlgJ8yV2ODPfOJBl35jTKipneRb+FUyF0 S+Iib2++eBvtyl5O4J+c9TD1y1clWjOv4D9gYcAfSKCTQNp7yjoSDlrEY7rUSQeNvq 3nEc1UznCiqhg== From: "Rafael J. 
Wysocki" To: Linux PM Cc: LKML , Lukasz Luba , Peter Zijlstra , Srinivas Pandruvada , Dietmar Eggemann , Morten Rasmussen , Vincent Guittot , Ricardo Neri , Pierre Gondois , Christian Loehle , Viresh Kumar Subject: [RFC][PATCH v0.3 3/6] cpufreq/sched: Allow .setpolicy() cpufreq drivers to enable EAS Date: Fri, 07 Mar 2025 20:16:13 +0100 Message-ID: <1940620.CQOukoFCf9@rjwysocki.net> In-Reply-To: <22640172.EfDdHjke4D@rjwysocki.net> References: <22640172.EfDdHjke4D@rjwysocki.net> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-CLIENT-IP: 195.136.19.94 X-CLIENT-HOSTNAME: 195.136.19.94 X-VADE-SPAMSTATE: clean X-VADE-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgeefvddrtddtgdduudduheefucetufdoteggodetrfdotffvucfrrhhofhhilhgvmecujffqoffgrffnpdggtffipffknecuuegrihhlohhuthemucduhedtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenucfjughrpefhvfevufffkfgjfhgggfgtsehtufertddttdejnecuhfhrohhmpedftfgrfhgrvghlucflrdcuhgihshhotghkihdfuceorhhjfiesrhhjfiihshhotghkihdrnhgvtheqnecuggftrfgrthhtvghrnhepvdffueeitdfgvddtudegueejtdffteetgeefkeffvdeftddttdeuhfegfedvjefhnecukfhppeduleehrddufeeirdduledrleegnecuvehluhhsthgvrhfuihiivgepudenucfrrghrrghmpehinhgvthepudelhedrudefiedrudelrdelgedphhgvlhhopehkrhgvrggthhgvrhdrlhhotggrlhhnvghtpdhmrghilhhfrhhomheprhhjfiesrhhjfiihshhotghkihdrnhgvthdpnhgspghrtghpthhtohepuddvpdhrtghpthhtoheplhhinhhugidqphhmsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhinhhugidqkhgvrhhnvghlsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhukhgrshiirdhluhgsrgesrghrmhdrtghomhdprhgtphhtthhopehpvghtvghriiesihhnfhhrrgguvggrugdrohhrghdprhgtphhtthhopehsrhhinhhivhgrshdrphgrnhgurhhuvhgruggrsehlihhnuhigrdh X-DCC--Metrics: v370.home.net.pl 1024; Body=12 Fuz1=12 Fuz2=12 From: Rafael J. Wysocki Some cpufreq drivers, like intel_pstate, have built-in governors that are used instead of regular cpufreq governors, schedutil in particular, but they can work with EAS just fine, so allow EAS to be used with those drivers. Signed-off-by: Rafael J. Wysocki --- drivers/cpufreq/cpufreq.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -3053,6 +3053,20 @@ return 0; } +static bool cpufreq_policy_is_good_for_eas(struct cpufreq_policy *policy) +{ + /* + * For EAS compatibility, require that either schedutil is the policy + * governor or the policy is governed directly by the cpufreq driver. + * + * In the latter case, it is assumed that EAS can only be enabled by the + * cpufreq driver itself which will not enable EAS if it does not meet + * the EAS' expectations regarding performance scaling response. + */ + return sugov_is_cpufreq_governor(policy) || (!policy->governor && + policy->policy != CPUFREQ_POLICY_UNKNOWN); +} + bool cpufreq_ready_for_eas(const struct cpumask *cpu_mask) { int i; @@ -3069,7 +3083,7 @@ return false; } - policy_is_ready = sugov_is_cpufreq_governor(policy); + policy_is_ready = cpufreq_policy_is_good_for_eas(policy); cpufreq_cpu_put(policy); if (!policy_is_ready) { pr_debug("rd %*pbl: schedutil is mandatory for EAS\n", From patchwork Fri Mar 7 19:17:19 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Rafael J. 
Wysocki" X-Patchwork-Id: 871800 Received: from cloudserver094114.home.pl (cloudserver094114.home.pl [79.96.170.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id ACBD624DFEF; Fri, 7 Mar 2025 19:42:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=79.96.170.134 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741376571; cv=none; b=Npb0j1FTFyRvIJksFs5gqJ/vjforUMJfDMkO2mTlmC83ZNRW1kyuins1BQfIfKGn88bQQ22ylwXwKo1AXtk/0Y3ZTYdritR0rZ4MTJj+yp2t93jS7Y6d3VtyaZ9wx89HXjh0Cz1YoOm+ocVVBlE3qGAwzMVCFmY3Z88JYRGb82Q= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741376571; c=relaxed/simple; bh=mBw2XwgaWgmKzoEZRR2bvaPE7UQBL16C6NnCX0Wta5Q=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=nXZYz/1qy2Gl32wdSVLdiuffATNOCM4YUZMzKWAik5TurPl7/J86LEFA7iVKr3Rc6qewcbJbeZMxyQUTP8m3OUN+13GpBiGq/hZ81pP//CZZSEbZWmAUSrhI8caiRzXSTbS5FU6Tu/bSgax5TQxEktWgYcnXIFpBj/1SpvOJfBY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rjwysocki.net; spf=pass smtp.mailfrom=rjwysocki.net; dkim=pass (2048-bit key) header.d=rjwysocki.net header.i=@rjwysocki.net header.b=p7f9spbx; arc=none smtp.client-ip=79.96.170.134 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rjwysocki.net Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=rjwysocki.net Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=rjwysocki.net header.i=@rjwysocki.net header.b="p7f9spbx" Received: from localhost (127.0.0.1) (HELO v370.home.net.pl) by /usr/run/smtp (/usr/run/postfix/private/idea_relay_lmtp) via UNIX with SMTP (IdeaSmtpServer 6.3.1) id 6dc80f473847aa41; Fri, 7 Mar 2025 20:42:42 +0100 Received: from kreacher.localnet (unknown [195.136.19.94]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by cloudserver094114.home.pl (Postfix) with ESMTPSA id 8E5279A0BFB; Fri, 7 Mar 2025 20:42:41 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=rjwysocki.net; s=dkim; t=1741376562; bh=mBw2XwgaWgmKzoEZRR2bvaPE7UQBL16C6NnCX0Wta5Q=; h=From:Subject:Date; b=p7f9spbxinN3tlcyX+fbdZT9357V0mnqQRDMHhNXDRfILK7aNirP5tnpOBgppm9j+ O3pL8OKx8HzHCrPKuhk1g+hgj+P6q9ygG2W6829OY53u8WtlKuaNqY61oKo3eMnW5U bK7OrX1Dze0eXzTD+RFYJc/I5+nFr24WbN1oAmVkuJnvDpvNcSo8wy8LeXqpc/f4/d Y/mmLO9N4tokHhEK01s8+0lnPWAprZ/ha+pdbqHg9wkqislxrgD4x+yT+YLZbXHUN0 AmupJomE3F2WgGkzN9JBwkeQ1d+lRnjSJCEnHDNDbqARczdk0MUpAM5vCyOr4B+1ae nZovTD+Hlawag== From: "Rafael J. 
Wysocki" To: Linux PM Cc: LKML , Lukasz Luba , Peter Zijlstra , Srinivas Pandruvada , Dietmar Eggemann , Morten Rasmussen , Vincent Guittot , Ricardo Neri , Pierre Gondois , Christian Loehle , Viresh Kumar Subject: [RFC][PATCH v0.3 4/6] PM: EM: Move CPU capacity check to em_adjust_new_capacity() Date: Fri, 07 Mar 2025 20:17:19 +0100 Message-ID: <2667366.Lt9SDvczpP@rjwysocki.net> In-Reply-To: <22640172.EfDdHjke4D@rjwysocki.net> References: <22640172.EfDdHjke4D@rjwysocki.net> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-CLIENT-IP: 195.136.19.94 X-CLIENT-HOSTNAME: 195.136.19.94 X-VADE-SPAMSTATE: clean X-VADE-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgeefvddrtddtgdduudduheefucetufdoteggodetrfdotffvucfrrhhofhhilhgvmecujffqoffgrffnpdggtffipffknecuuegrihhlohhuthemucduhedtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenucfjughrpefhvfevufffkfgjfhgggfgtsehtufertddttdejnecuhfhrohhmpedftfgrfhgrvghlucflrdcuhgihshhotghkihdfuceorhhjfiesrhhjfiihshhotghkihdrnhgvtheqnecuggftrfgrthhtvghrnhepvdffueeitdfgvddtudegueejtdffteetgeefkeffvdeftddttdeuhfegfedvjefhnecukfhppeduleehrddufeeirdduledrleegnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehinhgvthepudelhedrudefiedrudelrdelgedphhgvlhhopehkrhgvrggthhgvrhdrlhhotggrlhhnvghtpdhmrghilhhfrhhomheprhhjfiesrhhjfiihshhotghkihdrnhgvthdpnhgspghrtghpthhtohepuddvpdhrtghpthhtoheplhhinhhugidqphhmsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhinhhugidqkhgvrhhnvghlsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhukhgrshiirdhluhgsrgesrghrmhdrtghomhdprhgtphhtthhopehpvghtvghriiesihhnfhhrrgguvggrugdrohhrghdprhgtphhtthhopehsrhhinhhivhgrshdrphgrnhgurhhuvhgruggrsehlihhnuhigrdh X-DCC--Metrics: v370.home.net.pl 1024; Body=12 Fuz1=12 Fuz2=12 From: Rafael J. Wysocki Move the check of the CPU capacity currently stored in the energy model against the arch_scale_cpu_capacity() value to em_adjust_new_capacity() so it will be done regardless of where the latter is called from. This will be useful when a new em_adjust_new_capacity() caller is added subsequently. While at it, move the pd local variable declaration in em_check_capacity_update() into the loop in which it is used. No intentional functional impact. Signed-off-by: Rafael J. Wysocki --- kernel/power/energy_model.c | 40 +++++++++++++++++----------------------- 1 file changed, 17 insertions(+), 23 deletions(-) --- a/kernel/power/energy_model.c +++ b/kernel/power/energy_model.c @@ -721,10 +721,24 @@ * Adjustment of CPU performance values after boot, when all CPUs capacites * are correctly calculated. 
  */
-static void em_adjust_new_capacity(struct device *dev,
+static void em_adjust_new_capacity(unsigned int cpu, struct device *dev,
                                    struct em_perf_domain *pd)
 {
+        unsigned long cpu_capacity = arch_scale_cpu_capacity(cpu);
         struct em_perf_table *em_table;
+        struct em_perf_state *table;
+        unsigned long em_max_perf;
+
+        rcu_read_lock();
+        table = em_perf_state_from_pd(pd);
+        em_max_perf = table[pd->nr_perf_states - 1].performance;
+        rcu_read_unlock();
+
+        if (em_max_perf == cpu_capacity)
+                return;
+
+        pr_debug("updating cpu%d cpu_cap=%lu old capacity=%lu\n", cpu,
+                 cpu_capacity, em_max_perf);
 
         em_table = em_table_dup(pd);
         if (!em_table) {
@@ -740,9 +754,6 @@
 static void em_check_capacity_update(void)
 {
         cpumask_var_t cpu_done_mask;
-        struct em_perf_state *table;
-        struct em_perf_domain *pd;
-        unsigned long cpu_capacity;
         int cpu;
 
         if (!zalloc_cpumask_var(&cpu_done_mask, GFP_KERNEL)) {
@@ -753,7 +764,7 @@
         /* Check if CPUs capacity has changed than update EM */
         for_each_possible_cpu(cpu) {
                 struct cpufreq_policy *policy;
-                unsigned long em_max_perf;
+                struct em_perf_domain *pd;
                 struct device *dev;
 
                 if (cpumask_test_cpu(cpu, cpu_done_mask))
@@ -776,24 +787,7 @@
                 cpumask_or(cpu_done_mask, cpu_done_mask,
                            em_span_cpus(pd));
 
-                cpu_capacity = arch_scale_cpu_capacity(cpu);
-
-                rcu_read_lock();
-                table = em_perf_state_from_pd(pd);
-                em_max_perf = table[pd->nr_perf_states - 1].performance;
-                rcu_read_unlock();
-
-                /*
-                 * Check if the CPU capacity has been adjusted during boot
-                 * and trigger the update for new performance values.
-                 */
-                if (em_max_perf == cpu_capacity)
-                        continue;
-
-                pr_debug("updating cpu%d cpu_cap=%lu old capacity=%lu\n",
-                         cpu, cpu_capacity, em_max_perf);
-
-                em_adjust_new_capacity(dev, pd);
+                em_adjust_new_capacity(cpu, dev, pd);
         }
 
         free_cpumask_var(cpu_done_mask);
Wysocki" X-Patchwork-Id: 871434 Received: from cloudserver094114.home.pl (cloudserver094114.home.pl [79.96.170.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BA6B4183CB0; Fri, 7 Mar 2025 19:42:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=79.96.170.134 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741376571; cv=none; b=iPoXC7qRi3/koYuqtkcL+ff2oKckZXT3ix2abAOPF+noCQD2ZCzysfD8FVIYVTGbzroDnQfDU5dQS+CIZ5pBzYh8DSeiHdOLZ8w+dlT5cDPf5ldnkdr3SWuneiLgggT09aXamijvBS3Y6CdHTLowhgbsOfndR+jX8UepPaMtfgs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741376571; c=relaxed/simple; bh=jSPlM41dWFTbPXF7V5lCIMCIGH9Lb5+mtbSwIByMCl0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=cR2AzoxQKCkpDSuqvLJbYueu+LoXP9xdUCPTlsup4f30Rji4NIRORaGScXuUsJmQuBhSUYTpQN4+Qet9sOpxpEgMs44l9hU2MDnVy8A/JYbIUvhG004iXdOvreH0q9vYiL7yLcaxZGbNYyjC0LlsvVuZts4RbHJe1E9g3gAHn0A= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rjwysocki.net; spf=pass smtp.mailfrom=rjwysocki.net; dkim=pass (2048-bit key) header.d=rjwysocki.net header.i=@rjwysocki.net header.b=eQLSakU8; arc=none smtp.client-ip=79.96.170.134 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rjwysocki.net Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=rjwysocki.net Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=rjwysocki.net header.i=@rjwysocki.net header.b="eQLSakU8" Received: from localhost (127.0.0.1) (HELO v370.home.net.pl) by /usr/run/smtp (/usr/run/postfix/private/idea_relay_lmtp) via UNIX with SMTP (IdeaSmtpServer 6.3.1) id f32e28ba78c6fdc3; Fri, 7 Mar 2025 20:42:41 +0100 Received: from kreacher.localnet (unknown [195.136.19.94]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by cloudserver094114.home.pl (Postfix) with ESMTPSA id A53B59A0BFB; Fri, 7 Mar 2025 20:42:40 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=rjwysocki.net; s=dkim; t=1741376561; bh=jSPlM41dWFTbPXF7V5lCIMCIGH9Lb5+mtbSwIByMCl0=; h=From:Subject:Date; b=eQLSakU86HKhbNp0rrOsCLm8V04zzVHJU6+vfHK4TlXD7aZNSgGUtQ88U2MO6qH8I 7nch8+tnET/wdRgC0gt0GsDvMY3wWqFiI8gT0nZ0dot+zLlR7wKPLi+27Q8nd/vcB9 vpZX3qqcS1MyJKHrwuOEnKiYVD2SG2TF1QAG8qo1LPKAJTEnz3mvh0I0SnwB4Y6W5E Ge8TFAWuI3jRNQT3gJyoUKB47e5E2AcXGby8CNcxpAC0LH8dBE/oNZIR8uG6qwvpPb CXzskM671AgeV3eOyMWpkYR+oyre06A6zDXuEtJlK9v3T1He/4eXUHck2h05thscXD Wffvm7kRPqYTg== From: "Rafael J. 
Wysocki" To: Linux PM Cc: LKML , Lukasz Luba , Peter Zijlstra , Srinivas Pandruvada , Dietmar Eggemann , Morten Rasmussen , Vincent Guittot , Ricardo Neri , Pierre Gondois , Christian Loehle , Viresh Kumar Subject: [RFC][PATCH v0.3 5/6] PM: EM: Introduce em_adjust_cpu_capacity() Date: Fri, 07 Mar 2025 20:39:34 +0100 Message-ID: <2446858.NG923GbCHz@rjwysocki.net> In-Reply-To: <22640172.EfDdHjke4D@rjwysocki.net> References: <22640172.EfDdHjke4D@rjwysocki.net> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-CLIENT-IP: 195.136.19.94 X-CLIENT-HOSTNAME: 195.136.19.94 X-VADE-SPAMSTATE: clean X-VADE-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgeefvddrtddtgdduudduheegucetufdoteggodetrfdotffvucfrrhhofhhilhgvmecujffqoffgrffnpdggtffipffknecuuegrihhlohhuthemucduhedtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenucfjughrpefhvfevufffkfgjfhgggfgtsehtufertddttdejnecuhfhrohhmpedftfgrfhgrvghlucflrdcuhgihshhotghkihdfuceorhhjfiesrhhjfiihshhotghkihdrnhgvtheqnecuggftrfgrthhtvghrnhepvdffueeitdfgvddtudegueejtdffteetgeefkeffvdeftddttdeuhfegfedvjefhnecukfhppeduleehrddufeeirdduledrleegnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehinhgvthepudelhedrudefiedrudelrdelgedphhgvlhhopehkrhgvrggthhgvrhdrlhhotggrlhhnvghtpdhmrghilhhfrhhomheprhhjfiesrhhjfiihshhotghkihdrnhgvthdpnhgspghrtghpthhtohepuddvpdhrtghpthhtoheplhhinhhugidqphhmsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhinhhugidqkhgvrhhnvghlsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhukhgrshiirdhluhgsrgesrghrmhdrtghomhdprhgtphhtthhopehpvghtvghriiesihhnfhhrrgguvggrugdrohhrghdprhgtphhtthhopehsrhhinhhivhgrshdrphgrnhgurhhuvhgruggrsehlihhnuhigrdh X-DCC--Metrics: v370.home.net.pl 1024; Body=12 Fuz1=12 Fuz2=12 From: Rafael J. Wysocki Add a function for updating the Energy Model for a CPU after its capacity has changed, which subsequently will be used by the intel_pstate driver. An EM_PERF_DOMAIN_ARTIFICIAL check is added to em_adjust_new_capacity() to prevent it from calling em_compute_costs() for an "artificial" perf domain with a NULL cb parameter which would cause it to crash. Signed-off-by: Rafael J. Wysocki --- Note that this function is needed because the performance level values in the EM "state" table need to be adjusted on CPU capacity changes. In the intel_pstate case the cost values associated with them don't change because they are artificial anyway, so replacing the entire table just in order to update the performance level values is a bit wasteful, but it seems to be an exception (in the other cases when the CPU capacity changes, the cost values change too AFAICS). 
---
 include/linux/energy_model.h |    2 ++
 kernel/power/energy_model.c  |   28 ++++++++++++++++++++++++----
 2 files changed, 26 insertions(+), 4 deletions(-)

--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -179,6 +179,7 @@
 int em_dev_update_chip_binning(struct device *dev);
 int em_update_performance_limits(struct em_perf_domain *pd,
                 unsigned long freq_min_khz, unsigned long freq_max_khz);
+void em_adjust_cpu_capacity(unsigned int cpu);
 void em_rebuild_sched_domains(void);
 
 /**
@@ -405,6 +406,7 @@
 {
         return -EINVAL;
 }
+void em_adjust_cpu_capacity(unsigned int cpu) {}
 static inline void em_rebuild_sched_domains(void) {}
 #endif
 
--- a/kernel/power/energy_model.c
+++ b/kernel/power/energy_model.c
@@ -698,10 +698,12 @@
 {
         int ret;
 
-        ret = em_compute_costs(dev, em_table->state, NULL, pd->nr_perf_states,
-                               pd->flags);
-        if (ret)
-                goto free_em_table;
+        if (!(pd->flags & EM_PERF_DOMAIN_ARTIFICIAL)) {
+                ret = em_compute_costs(dev, em_table->state, NULL,
+                                       pd->nr_perf_states, pd->flags);
+                if (ret)
+                        goto free_em_table;
+        }
 
         ret = em_dev_update_perf_domain(dev, em_table);
         if (ret)
@@ -751,6 +753,24 @@
         em_recalc_and_update(dev, pd, em_table);
 }
 
+/**
+ * em_adjust_cpu_capacity() - Adjust the EM for a CPU after a capacity update.
+ * @cpu: Target CPU.
+ *
+ * Adjust the existing EM for @cpu after a capacity update under the assumption
+ * that the capacity has been updated in the same way for all of the CPUs in
+ * the same perf domain.
+ */
+void em_adjust_cpu_capacity(unsigned int cpu)
+{
+        struct device *dev = get_cpu_device(cpu);
+        struct em_perf_domain *pd;
+
+        pd = em_pd_get(dev);
+        if (pd)
+                em_adjust_new_capacity(cpu, dev, pd);
+}
+
 static void em_check_capacity_update(void)
 {
         cpumask_var_t cpu_done_mask;
Wysocki" X-Patchwork-Id: 871801 Received: from cloudserver094114.home.pl (cloudserver094114.home.pl [79.96.170.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0BBE5250BFB; Fri, 7 Mar 2025 19:42:47 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=79.96.170.134 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741376570; cv=none; b=Lyss55xg/tNbp0TZcFsZ5nE1J3L2vRt5wAHOVuNJ+7bD41I1Lh5k6rIJfsX2EY42sn12XxDd/2s0mQ1I3uhSLydoZ6KAerEDNGd5YudJ1Xam/3XJlHQiwsovDE7FGY+PvdteUNWm0ahPJ3jmoMNI0RmNuRzBFZhexut8laO/D3M= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741376570; c=relaxed/simple; bh=DwNi7b0jPHaorTviJwSdwnOCF3DHeAnTIYNUscDbSYI=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=NIVplHcwHB6G7YK9NOcejlJnPfwXrc1I05fWeGQMg2aoieO/BcYCY/fKVt6GDnG9uv5f1Q3hajjtaVAdy88K6uiNJF4PvuEE9PmDqmw2VTsd8iH1LmNEhxoYRa8xdG46eEEeCisWwnh+msHLCeOFiIkoKaMuVkgpsSDQ26nF7rU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rjwysocki.net; spf=pass smtp.mailfrom=rjwysocki.net; dkim=pass (2048-bit key) header.d=rjwysocki.net header.i=@rjwysocki.net header.b=PZZ9TQBF; arc=none smtp.client-ip=79.96.170.134 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=rjwysocki.net Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=rjwysocki.net Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=rjwysocki.net header.i=@rjwysocki.net header.b="PZZ9TQBF" Received: from localhost (127.0.0.1) (HELO v370.home.net.pl) by /usr/run/smtp (/usr/run/postfix/private/idea_relay_lmtp) via UNIX with SMTP (IdeaSmtpServer 6.3.1) id 8209ea52667ca11d; Fri, 7 Mar 2025 20:42:40 +0100 Received: from kreacher.localnet (unknown [195.136.19.94]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by cloudserver094114.home.pl (Postfix) with ESMTPSA id 6D62A9A0BFB; Fri, 7 Mar 2025 20:42:39 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=rjwysocki.net; s=dkim; t=1741376560; bh=DwNi7b0jPHaorTviJwSdwnOCF3DHeAnTIYNUscDbSYI=; h=From:Subject:Date; b=PZZ9TQBFL7jVqSGfV6sk6l7gGCrnzavWmssV8pe5CoO7wh1MLz/ucu1suetSin9Tw nAuDjpB2psRvilPClAR3kjHDxQue11DKpdKfUS7/ECpmMRY1Q6XUwu3zB+9GYtOvlJ n4fsfBH97qHM5uq7bI7Ca+g4QiNoIhbuGiCzfE79+uNOnO8TazVq9emXS4DfW4ObOq V0POLJ+phxib3fR1yL9pYiQwN2v+7rbBjj3+z1SNJCL+2OK8LI0Srp5gMb/ZA4csRR IZq03PiQHI1inuJlEqHdkDbHRzScOTVmQfUGPXTi3993cGA9BGYWk95ZvjW2NnxZNh NgiOErt4aRaCw== From: "Rafael J. 
Wysocki" To: Linux PM Cc: LKML , Lukasz Luba , Peter Zijlstra , Srinivas Pandruvada , Dietmar Eggemann , Morten Rasmussen , Vincent Guittot , Ricardo Neri , Pierre Gondois , Christian Loehle Subject: [RFC][PATCH v0.3 6/6] cpufreq: intel_pstate: EAS support for hybrid platforms Date: Fri, 07 Mar 2025 20:42:28 +0100 Message-ID: <2028801.yKVeVyVuyW@rjwysocki.net> In-Reply-To: <22640172.EfDdHjke4D@rjwysocki.net> References: <22640172.EfDdHjke4D@rjwysocki.net> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-CLIENT-IP: 195.136.19.94 X-CLIENT-HOSTNAME: 195.136.19.94 X-VADE-SPAMSTATE: clean X-VADE-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgeefvddrtddtgdduudduheefucetufdoteggodetrfdotffvucfrrhhofhhilhgvmecujffqoffgrffnpdggtffipffknecuuegrihhlohhuthemucduhedtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenucfjughrpefhvfevufffkfgjfhgggfgtsehtufertddttdejnecuhfhrohhmpedftfgrfhgrvghlucflrdcuhgihshhotghkihdfuceorhhjfiesrhhjfiihshhotghkihdrnhgvtheqnecuggftrfgrthhtvghrnhepvdffueeitdfgvddtudegueejtdffteetgeefkeffvdeftddttdeuhfegfedvjefhnecukfhppeduleehrddufeeirdduledrleegnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehinhgvthepudelhedrudefiedrudelrdelgedphhgvlhhopehkrhgvrggthhgvrhdrlhhotggrlhhnvghtpdhmrghilhhfrhhomheprhhjfiesrhhjfiihshhotghkihdrnhgvthdpnhgspghrtghpthhtohepuddupdhrtghpthhtoheplhhinhhugidqphhmsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhinhhugidqkhgvrhhnvghlsehvghgvrhdrkhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhhukhgrshiirdhluhgsrgesrghrmhdrtghomhdprhgtphhtthhopehpvghtvghriiesihhnfhhrrgguvggrugdrohhrghdprhgtphhtthhopehsrhhinhhivhgrshdrphgrnhgurhhuvhgruggrsehlihhnuhigrdh X-DCC--Metrics: v370.home.net.pl 1024; Body=11 Fuz1=11 Fuz2=11 From: Rafael J. Wysocki Modify intel_pstate to register EM perf domains for CPUs on hybrid platforms and enable EAS on them. This change is targeting platforms (for example, Lunar Lake) where the "little" CPUs (E-cores) are always more energy-efficient than the "big" or "performance" CPUs (P-cores) when run at the same HWP performance level, so it is sufficient to tell EAS that E-cores are always preferred (so long as there is enough spare capacity on one of them to run the given task). However, migrating tasks between CPUs of the same type too often is not desirable because it may hurt both performance and energy efficiency due to leaving warm caches behind. For this reason, register a separate perf domain for each CPU and assign costs for them so that the cost mostly depends on the CPU type, but there is also a small component of it depending on the performance level (utilization) which allows to avoid substantial load imbalances between CPUs of the same type. The observation used here is that the IPC metric value for a given CPU is inversely proportional to its performance-to-frequency scaling factor and the cost of running on it can be assumed to be roughly proportional to that IPC ratio (in principle, the higher the IPC ratio, the more resources are utilized when running at a given frequency, so the cost should be higher). This main component of the cost is amended with a small addition proportional performance. EM perf domains for all CPUs that are online during system startup are registered at the driver initialization time, after asymmetric capacity support has been enabled. For the CPUs that become online later, EM perf domains are registered after setting the asymmetric capacity for them. Signed-off-by: Rafael J. 
 drivers/cpufreq/intel_pstate.c |  132 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 127 insertions(+), 5 deletions(-)

--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -44,6 +44,8 @@
 #define INTEL_CPUFREQ_TRANSITION_DELAY_HWP     5000
 #define INTEL_CPUFREQ_TRANSITION_DELAY         500
 
+#define INTEL_PSTATE_CORE_SCALING      100000
+
 #ifdef CONFIG_ACPI
 #include 
 #include 
@@ -221,6 +223,7 @@
  * @sched_flags:        Store scheduler flags for possible cross CPU update
  * @hwp_boost_min:      Last HWP boosted min performance
  * @suspended:          Whether or not the driver has been suspended.
+ * @em_registered:      If set, an energy model has been registered.
  * @hwp_notify_work:    workqueue for HWP notifications.
  *
  * This structure stores per CPU instance data for all CPUs.
@@ -260,6 +263,9 @@
         unsigned int sched_flags;
         u32 hwp_boost_min;
         bool suspended;
+#ifdef CONFIG_ENERGY_MODEL
+        bool em_registered;
+#endif
         struct delayed_work hwp_notify_work;
 };
 
@@ -311,7 +317,7 @@
 
 static inline int core_get_scaling(void)
 {
-        return 100000;
+        return INTEL_PSTATE_CORE_SCALING;
 }
 
 #ifdef CONFIG_ACPI
@@ -945,12 +951,105 @@
  */
 static DEFINE_MUTEX(hybrid_capacity_lock);
 
+#ifdef CONFIG_ENERGY_MODEL
+#define HYBRID_EM_STATE_COUNT  4
+
+static int hybrid_active_power(struct device *dev, unsigned long *power,
+                               unsigned long *freq)
+{
+        /*
+         * Create "utilization bins" of 0-40%, 40%-60%, 60%-80%, and 80%-100%
+         * of the maximum capacity such that two CPUs of the same type will be
+         * regarded as equally attractive if the utilization of each of them
+         * falls into the same bin, which should prevent tasks from being
+         * migrated between them too often.
+         *
+         * For this purpose, return the "frequency" of 2 for the first
+         * performance level and otherwise leave the value set by the caller.
+         */
+        if (!*freq)
+                *freq = 2;
+
+        /* No power information. */
+        *power = EM_MAX_POWER;
+
+        return 0;
+}
+
+static int hybrid_get_cost(struct device *dev, unsigned long freq,
+                           unsigned long *cost)
+{
+        struct pstate_data *pstate = &all_cpu_data[dev->id]->pstate;
+
+        /*
+         * The smaller the perf-to-frequency scaling factor, the larger the IPC
+         * ratio between the given CPU and the least capable CPU in the system.
+         * Regard that IPC ratio as the primary cost component and assume that
+         * the scaling factors for different CPU types will differ by at least
+         * 5% and they will not be above INTEL_PSTATE_CORE_SCALING.
+         *
+         * Add the freq value to the cost, so that the cost of running on CPUs
+         * of the same type in different "utilization bins" is different.
+         */
+        *cost = div_u64(100ULL * INTEL_PSTATE_CORE_SCALING, pstate->scaling) + freq;
+
+        return 0;
+}
+
+static bool hybrid_register_perf_domain(unsigned int cpu)
+{
+        static const struct em_data_callback cb
+                        = EM_ADV_DATA_CB(hybrid_active_power, hybrid_get_cost);
+        struct cpudata *cpudata = all_cpu_data[cpu];
+        struct device *cpu_dev;
+
+        /*
+         * Registering EM perf domains without enabling asymmetric CPU capacity
+         * support is not really useful and one domain should not be registered
+         * more than once.
+         */
+        if (!hybrid_max_perf_cpu || cpudata->em_registered)
+                return false;
+
+        cpu_dev = get_cpu_device(cpu);
+        if (!cpu_dev)
+                return false;
+
+        if (em_dev_register_perf_domain(cpu_dev, HYBRID_EM_STATE_COUNT, &cb,
+                                        cpumask_of(cpu), false))
+                return false;
+
+        cpudata->em_registered = true;
+
+        return true;
+}
+
+static void hybrid_register_all_perf_domains(void)
+{
+        unsigned int cpu;
+
+        for_each_online_cpu(cpu)
+                hybrid_register_perf_domain(cpu);
+}
+
+static void hybrid_update_perf_domain(struct cpudata *cpu)
+{
+        if (cpu->em_registered)
+                em_adjust_cpu_capacity(cpu->cpu);
+}
+#else /* !CONFIG_ENERGY_MODEL */
+static inline bool hybrid_register_perf_domain(unsigned int cpu) { return false; }
+static inline void hybrid_register_all_perf_domains(void) {}
+static inline void hybrid_update_perf_domain(struct cpudata *cpu) {}
+#endif /* CONFIG_ENERGY_MODEL */
+
 static void hybrid_set_cpu_capacity(struct cpudata *cpu)
 {
         arch_set_cpu_capacity(cpu->cpu, cpu->capacity_perf,
                               hybrid_max_perf_cpu->capacity_perf,
                               cpu->capacity_perf,
                               cpu->pstate.max_pstate_physical);
+        hybrid_update_perf_domain(cpu);
 
         pr_debug("CPU%d: perf = %u, max. perf = %u, base perf = %d\n", cpu->cpu,
                  cpu->capacity_perf, hybrid_max_perf_cpu->capacity_perf,
@@ -1039,6 +1138,11 @@
         guard(mutex)(&hybrid_capacity_lock);
 
         __hybrid_refresh_cpu_capacity_scaling();
+        /*
+         * Perf domains are not registered before setting hybrid_max_perf_cpu,
+         * so register them all after setting up CPU capacity scaling.
+         */
+        hybrid_register_all_perf_domains();
 }
 
 static void hybrid_init_cpu_capacity_scaling(bool refresh)
@@ -1066,7 +1170,7 @@
                 hybrid_refresh_cpu_capacity_scaling();
                 /*
                  * Disabling ITMT causes sched domains to be rebuilt to disable asym
-                 * packing and enable asym capacity.
+                 * packing and enable asym capacity and EAS.
                  */
                 sched_clear_itmt_support();
         }
@@ -1144,6 +1248,14 @@
         }
 
         hybrid_set_cpu_capacity(cpu);
+        /*
+         * If the CPU was offline to start with and it is going online for the
+         * first time, a perf domain needs to be registered for it if hybrid
+         * capacity scaling has been enabled already. In that case, sched
+         * domains need to be rebuilt to take the new perf domain into account.
+         */
+        if (hybrid_register_perf_domain(cpu->cpu))
+                em_rebuild_sched_domains();
 
 unlock:
         mutex_unlock(&hybrid_capacity_lock);
@@ -3416,6 +3528,8 @@
 
 static int intel_pstate_update_status(const char *buf, size_t size)
 {
+        int ret = -EINVAL;
+
         if (size == 3 && !strncmp(buf, "off", size)) {
                 if (!intel_pstate_driver)
                         return -EINVAL;
@@ -3425,6 +3539,8 @@
                 cpufreq_unregister_driver(intel_pstate_driver);
                 intel_pstate_driver_cleanup();
+                /* Trigger EAS support reconfiguration in case it was used. */
+                rebuild_sched_domains_energy();
                 return 0;
         }
 
@@ -3436,7 +3552,13 @@
                         cpufreq_unregister_driver(intel_pstate_driver);
                 }
 
-                return intel_pstate_register_driver(&intel_pstate);
+                ret = intel_pstate_register_driver(&intel_pstate);
+                /*
+                 * If the previous status had been "passive" and the schedutil
+                 * governor had been used, it disabled EAS on exit, so trigger
+                 * sched domains rebuild in case EAS needs to be enabled again.
+                 */
+                rebuild_sched_domains_energy();
         }
 
         if (size == 7 && !strncmp(buf, "passive", size)) {
@@ -3448,10 +3570,10 @@
                         intel_pstate_sysfs_hide_hwp_dynamic_boost();
                 }
 
-                return intel_pstate_register_driver(&intel_cpufreq);
+                ret = intel_pstate_register_driver(&intel_cpufreq);
         }
 
-        return -EINVAL;
+        return ret;
 }
 
 static int no_load __initdata;