From patchwork Mon Jul 12 06:03:21 2021
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 475895
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Lukasz Luba, "Peter Zijlstra (Intel)", Viresh Kumar, Sasha Levin
Subject: [PATCH 5.12 118/700] thermal/cpufreq_cooling: Update offline CPUs per-cpu thermal_pressure
Date: Mon, 12 Jul 2021 08:03:21 +0200
Message-Id: <20210712060941.622648144@linuxfoundation.org>
In-Reply-To: <20210712060924.797321836@linuxfoundation.org>
References: <20210712060924.797321836@linuxfoundation.org>
X-Mailer: git-send-email 2.32.0
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Lukasz Luba

[ Upstream commit 2ad8ccc17d1e4270cf65a3f2a07a7534aa23e3fb ]

The thermal pressure signal gives the scheduler information about reduced
CPU capacity due to thermal events. It is based on a value stored in a
per-cpu 'thermal_pressure' variable. Online CPUs get the new value there,
while offline CPUs do not. Unfortunately, when a CPU comes back online,
the value read from its per-cpu variable might be wrong (stale data).
This can affect scheduler decisions, since the scheduler then sees a CPU
capacity different from what is actually available.

Fix it by making sure that all CPUs, online and offline, get the proper
value in their per-cpu variable when the thermal framework sets a capping.
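For illustration only (not part of the patch), here is a minimal user-space
C model of the stale-data problem; the thermal_pressure[] array, the bit
masks and set_thermal_pressure() are stand-ins for the kernel's per-cpu
'thermal_pressure' variable and for the policy->cpus vs. policy->related_cpus
masks, not real kernel APIs:

/*
 * Illustrative model only: a plain array stands in for the per-cpu
 * 'thermal_pressure' variable, and two bit masks stand in for the
 * online-only mask (policy->cpus) and the full mask of the policy
 * (policy->related_cpus).
 */
#include <stdio.h>

#define NR_CPUS 4

static unsigned long thermal_pressure[NR_CPUS];	/* "per-cpu" value */

/* Write @val into the per-CPU slot of every CPU set in @mask. */
static void set_thermal_pressure(unsigned int mask, unsigned long val)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (mask & (1u << cpu))
			thermal_pressure[cpu] = val;
}

int main(void)
{
	unsigned int online  = 0x7;	/* CPU3 is offline right now */
	unsigned int related = 0xf;	/* every CPU covered by the policy */

	/* Old behaviour: the capping only reaches the online CPUs. */
	set_thermal_pressure(online, 100);
	/* CPU3 comes back online and still carries the stale value 0. */
	printf("old: cpu3 pressure = %lu (stale)\n", thermal_pressure[3]);

	/* Fixed behaviour: the capping reaches all CPUs of the policy. */
	set_thermal_pressure(related, 100);
	printf("new: cpu3 pressure = %lu\n", thermal_pressure[3]);

	return 0;
}

The first printf shows the offline CPU missing the update, the second shows
it receiving the capped value regardless of its hotplug state, which is the
behaviour the one-line change below restores.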
Fixes: f12e4f66ab6a3 ("thermal/cpu-cooling: Update thermal pressure in case of a maximum frequency capping")
Signed-off-by: Lukasz Luba
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Viresh Kumar
Link: https://lore.kernel.org/r/20210614191030.22241-1-lukasz.luba@arm.com
Signed-off-by: Sasha Levin
---
 drivers/thermal/cpufreq_cooling.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
index 6956581ed7a4..b8ded3aef371 100644
--- a/drivers/thermal/cpufreq_cooling.c
+++ b/drivers/thermal/cpufreq_cooling.c
@@ -487,7 +487,7 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
 	ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
 	if (ret >= 0) {
 		cpufreq_cdev->cpufreq_state = state;
-		cpus = cpufreq_cdev->policy->cpus;
+		cpus = cpufreq_cdev->policy->related_cpus;
 		max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));
 		capacity = frequency * max_capacity;
 		capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;
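A note on the one-liner: in cpufreq, policy->cpus covers only the CPUs of
the policy that are currently online, whereas policy->related_cpus also
includes the offline ones. The cpus mask computed here is later handed to
the arch thermal-pressure update in this function (the call sits just past
the end of the hunk shown), so switching to related_cpus means a CPU that
is offline while the capping is applied still gets the correct per-cpu
value and does not come back online with stale data.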