From patchwork Thu Nov 19 07:38:07 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 328394
From: Viresh Kumar
To: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Juri Lelli,
    Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Daniel Bristot de Oliveira, "Rafael J. Wysocki", Viresh Kumar
Wysocki" , Viresh Kumar Cc: linux-kernel@vger.kernel.org, Quentin Perret , Lukasz Luba , linux-pm@vger.kernel.org Subject: [PATCH V3 1/2] sched/core: Rename and move schedutil_cpu_util() to core.c Date: Thu, 19 Nov 2020 13:08:07 +0530 Message-Id: X-Mailer: git-send-email 2.25.0.rc1.19.g042ed3e048af In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org There is nothing schedutil specific in schedutil_cpu_util(), move it to core.c and rename it to sched_cpu_util(), so it can be used from other parts of the kernel as well. The cpufreq_cooling stuff will make use of this in a later commit. Signed-off-by: Viresh Kumar --- include/linux/sched.h | 21 ++++++ kernel/sched/core.c | 115 ++++++++++++++++++++++++++++++ kernel/sched/cpufreq_schedutil.c | 116 +------------------------------ kernel/sched/fair.c | 6 +- kernel/sched/sched.h | 31 +-------- 5 files changed, 145 insertions(+), 144 deletions(-) -- 2.25.0.rc1.19.g042ed3e048af diff --git a/include/linux/sched.h b/include/linux/sched.h index 063cd120b459..926b944dae5e 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1926,6 +1926,27 @@ extern long sched_getaffinity(pid_t pid, struct cpumask *mask); #define TASK_SIZE_OF(tsk) TASK_SIZE #endif +#ifdef CONFIG_SMP +/** + * enum cpu_util_type - CPU utilization type + * @FREQUENCY_UTIL: Utilization used to select frequency + * @ENERGY_UTIL: Utilization used during energy calculation + * + * The utilization signals of all scheduling classes (CFS/RT/DL) and IRQ time + * need to be aggregated differently depending on the usage made of them. This + * enum is used within sched_cpu_util() to differentiate the types of + * utilization expected by the callers, and adjust the aggregation accordingly. + */ +enum cpu_util_type { + FREQUENCY_UTIL, + ENERGY_UTIL, +}; + +/* Returns effective CPU utilization, as seen by the scheduler */ +unsigned long sched_cpu_util(int cpu, enum cpu_util_type type, + unsigned long max); +#endif /* CONFIG_SMP */ + #ifdef CONFIG_RSEQ /* diff --git a/kernel/sched/core.c b/kernel/sched/core.c index d2003a7d5ab5..845c976ccd53 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -5117,6 +5117,121 @@ struct task_struct *idle_task(int cpu) return cpu_rq(cpu)->idle; } +#ifdef CONFIG_SMP +/* + * This function computes an effective utilization for the given CPU, to be + * used for frequency selection given the linear relation: f = u * f_max. + * + * The scheduler tracks the following metrics: + * + * cpu_util_{cfs,rt,dl,irq}() + * cpu_bw_dl() + * + * Where the cfs,rt and dl util numbers are tracked with the same metric and + * synchronized windows and are thus directly comparable. + * + * The cfs,rt,dl utilization are the running times measured with rq->clock_task + * which excludes things like IRQ and steal-time. These latter are then accrued + * in the irq utilization. + * + * The DL bandwidth number otoh is not a measured metric but a value computed + * based on the task model parameters and gives the minimal utilization + * required to meet deadlines. 
Signed-off-by: Viresh Kumar
---
 include/linux/sched.h            |  21 ++++++
 kernel/sched/core.c              | 115 ++++++++++++++++++++++++++++++
 kernel/sched/cpufreq_schedutil.c | 116 +------------------------------
 kernel/sched/fair.c              |   6 +-
 kernel/sched/sched.h             |  31 +--------
 5 files changed, 145 insertions(+), 144 deletions(-)

--
2.25.0.rc1.19.g042ed3e048af

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 063cd120b459..926b944dae5e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1926,6 +1926,27 @@ extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
 #define TASK_SIZE_OF(tsk)	TASK_SIZE
 #endif
 
+#ifdef CONFIG_SMP
+/**
+ * enum cpu_util_type - CPU utilization type
+ * @FREQUENCY_UTIL:	Utilization used to select frequency
+ * @ENERGY_UTIL:	Utilization used during energy calculation
+ *
+ * The utilization signals of all scheduling classes (CFS/RT/DL) and IRQ time
+ * need to be aggregated differently depending on the usage made of them. This
+ * enum is used within sched_cpu_util() to differentiate the types of
+ * utilization expected by the callers, and adjust the aggregation accordingly.
+ */
+enum cpu_util_type {
+	FREQUENCY_UTIL,
+	ENERGY_UTIL,
+};
+
+/* Returns effective CPU utilization, as seen by the scheduler */
+unsigned long sched_cpu_util(int cpu, enum cpu_util_type type,
+			     unsigned long max);
+#endif /* CONFIG_SMP */
+
 #ifdef CONFIG_RSEQ
 
 /*
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d2003a7d5ab5..845c976ccd53 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5117,6 +5117,121 @@ struct task_struct *idle_task(int cpu)
 	return cpu_rq(cpu)->idle;
 }
 
+#ifdef CONFIG_SMP
+/*
+ * This function computes an effective utilization for the given CPU, to be
+ * used for frequency selection given the linear relation: f = u * f_max.
+ *
+ * The scheduler tracks the following metrics:
+ *
+ *   cpu_util_{cfs,rt,dl,irq}()
+ *   cpu_bw_dl()
+ *
+ * Where the cfs,rt and dl util numbers are tracked with the same metric and
+ * synchronized windows and are thus directly comparable.
+ *
+ * The cfs,rt,dl utilization are the running times measured with rq->clock_task
+ * which excludes things like IRQ and steal-time. These latter are then accrued
+ * in the irq utilization.
+ *
+ * The DL bandwidth number otoh is not a measured metric but a value computed
+ * based on the task model parameters and gives the minimal utilization
+ * required to meet deadlines.
+ */
+unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
+				 unsigned long max, enum cpu_util_type type,
+				 struct task_struct *p)
+{
+	unsigned long dl_util, util, irq;
+	struct rq *rq = cpu_rq(cpu);
+
+	if (!uclamp_is_used() &&
+	    type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt)) {
+		return max;
+	}
+
+	/*
+	 * Early check to see if IRQ/steal time saturates the CPU, can be
+	 * because of inaccuracies in how we track these -- see
+	 * update_irq_load_avg().
+	 */
+	irq = cpu_util_irq(rq);
+	if (unlikely(irq >= max))
+		return max;
+
+	/*
+	 * Because the time spend on RT/DL tasks is visible as 'lost' time to
+	 * CFS tasks and we use the same metric to track the effective
+	 * utilization (PELT windows are synchronized) we can directly add them
+	 * to obtain the CPU's actual utilization.
+	 *
+	 * CFS and RT utilization can be boosted or capped, depending on
+	 * utilization clamp constraints requested by currently RUNNABLE
+	 * tasks.
+	 * When there are no CFS RUNNABLE tasks, clamps are released and
+	 * frequency will be gracefully reduced with the utilization decay.
+	 */
+	util = util_cfs + cpu_util_rt(rq);
+	if (type == FREQUENCY_UTIL)
+		util = uclamp_rq_util_with(rq, util, p);
+
+	dl_util = cpu_util_dl(rq);
+
+	/*
+	 * For frequency selection we do not make cpu_util_dl() a permanent part
+	 * of this sum because we want to use cpu_bw_dl() later on, but we need
+	 * to check if the CFS+RT+DL sum is saturated (ie. no idle time) such
+	 * that we select f_max when there is no idle time.
+	 *
+	 * NOTE: numerical errors or stop class might cause us to not quite hit
+	 * saturation when we should -- something for later.
+	 */
+	if (util + dl_util >= max)
+		return max;
+
+	/*
+	 * OTOH, for energy computation we need the estimated running time, so
+	 * include util_dl and ignore dl_bw.
+	 */
+	if (type == ENERGY_UTIL)
+		util += dl_util;
+
+	/*
+	 * There is still idle time; further improve the number by using the
+	 * irq metric. Because IRQ/steal time is hidden from the task clock we
+	 * need to scale the task numbers:
+	 *
+	 *              max - irq
+	 *   U' = irq + --------- * U
+	 *                 max
+	 */
+	util = scale_irq_capacity(util, irq, max);
+	util += irq;
+
+	/*
+	 * Bandwidth required by DEADLINE must always be granted while, for
+	 * FAIR and RT, we use blocked utilization of IDLE CPUs as a mechanism
+	 * to gracefully reduce the frequency when no tasks show up for longer
+	 * periods of time.
+	 *
+	 * Ideally we would like to set bw_dl as min/guaranteed freq and util +
+	 * bw_dl as requested freq. However, cpufreq is not yet ready for such
+	 * an interface. So, we only do the latter for now.
+	 */
+	if (type == FREQUENCY_UTIL)
+		util += cpu_bw_dl(rq);
+
+	return min(max, util);
+}
+
+unsigned long sched_cpu_util(int cpu, enum cpu_util_type type,
+			     unsigned long max)
+{
+	return effective_cpu_util(cpu, cpu_util_cfs(cpu_rq(cpu)), max, type,
+				  NULL);
+}
+#endif /* CONFIG_SMP */
+
 /**
  * find_process_by_pid - find a process with a matching PID value.
  * @pid: the pid in question.
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index e254745a82cb..a6de75c8b984 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -169,122 +169,12 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 	return cpufreq_driver_resolve_freq(policy, freq);
 }
 
-/*
- * This function computes an effective utilization for the given CPU, to be
- * used for frequency selection given the linear relation: f = u * f_max.
- *
- * The scheduler tracks the following metrics:
- *
- *   cpu_util_{cfs,rt,dl,irq}()
- *   cpu_bw_dl()
- *
- * Where the cfs,rt and dl util numbers are tracked with the same metric and
- * synchronized windows and are thus directly comparable.
- *
- * The cfs,rt,dl utilization are the running times measured with rq->clock_task
- * which excludes things like IRQ and steal-time. These latter are then accrued
- * in the irq utilization.
- *
- * The DL bandwidth number otoh is not a measured metric but a value computed
- * based on the task model parameters and gives the minimal utilization
- * required to meet deadlines.
- */
-unsigned long schedutil_cpu_util(int cpu, unsigned long util_cfs,
-				 unsigned long max, enum schedutil_type type,
-				 struct task_struct *p)
-{
-	unsigned long dl_util, util, irq;
-	struct rq *rq = cpu_rq(cpu);
-
-	if (!uclamp_is_used() &&
-	    type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt)) {
-		return max;
-	}
-
-	/*
-	 * Early check to see if IRQ/steal time saturates the CPU, can be
-	 * because of inaccuracies in how we track these -- see
-	 * update_irq_load_avg().
-	 */
-	irq = cpu_util_irq(rq);
-	if (unlikely(irq >= max))
-		return max;
-
-	/*
-	 * Because the time spend on RT/DL tasks is visible as 'lost' time to
-	 * CFS tasks and we use the same metric to track the effective
-	 * utilization (PELT windows are synchronized) we can directly add them
-	 * to obtain the CPU's actual utilization.
-	 *
-	 * CFS and RT utilization can be boosted or capped, depending on
-	 * utilization clamp constraints requested by currently RUNNABLE
-	 * tasks.
-	 * When there are no CFS RUNNABLE tasks, clamps are released and
-	 * frequency will be gracefully reduced with the utilization decay.
-	 */
-	util = util_cfs + cpu_util_rt(rq);
-	if (type == FREQUENCY_UTIL)
-		util = uclamp_rq_util_with(rq, util, p);
-
-	dl_util = cpu_util_dl(rq);
-
-	/*
-	 * For frequency selection we do not make cpu_util_dl() a permanent part
-	 * of this sum because we want to use cpu_bw_dl() later on, but we need
-	 * to check if the CFS+RT+DL sum is saturated (ie. no idle time) such
-	 * that we select f_max when there is no idle time.
-	 *
-	 * NOTE: numerical errors or stop class might cause us to not quite hit
-	 * saturation when we should -- something for later.
-	 */
-	if (util + dl_util >= max)
-		return max;
-
-	/*
-	 * OTOH, for energy computation we need the estimated running time, so
-	 * include util_dl and ignore dl_bw.
-	 */
-	if (type == ENERGY_UTIL)
-		util += dl_util;
-
-	/*
-	 * There is still idle time; further improve the number by using the
-	 * irq metric. Because IRQ/steal time is hidden from the task clock we
-	 * need to scale the task numbers:
-	 *
-	 *              max - irq
-	 *   U' = irq + --------- * U
-	 *                 max
-	 */
-	util = scale_irq_capacity(util, irq, max);
-	util += irq;
-
-	/*
-	 * Bandwidth required by DEADLINE must always be granted while, for
-	 * FAIR and RT, we use blocked utilization of IDLE CPUs as a mechanism
-	 * to gracefully reduce the frequency when no tasks show up for longer
-	 * periods of time.
-	 *
-	 * Ideally we would like to set bw_dl as min/guaranteed freq and util +
-	 * bw_dl as requested freq. However, cpufreq is not yet ready for such
-	 * an interface. So, we only do the latter for now.
- */
-	if (type == FREQUENCY_UTIL)
-		util += cpu_bw_dl(rq);
-
-	return min(max, util);
-}
-
 static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
 {
-	struct rq *rq = cpu_rq(sg_cpu->cpu);
-	unsigned long util = cpu_util_cfs(rq);
-	unsigned long max = arch_scale_cpu_capacity(sg_cpu->cpu);
-
-	sg_cpu->max = max;
-	sg_cpu->bw_dl = cpu_bw_dl(rq);
+	sg_cpu->max = arch_scale_cpu_capacity(sg_cpu->cpu);
+	sg_cpu->bw_dl = cpu_bw_dl(cpu_rq(sg_cpu->cpu));
 
-	return schedutil_cpu_util(sg_cpu->cpu, util, max, FREQUENCY_UTIL, NULL);
+	return sched_cpu_util(sg_cpu->cpu, FREQUENCY_UTIL, sg_cpu->max);
 }
 
 /**
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 290f9e38378c..0e1c8eb7ad53 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6499,7 +6499,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 		 * is already enough to scale the EM reported power
 		 * consumption at the (eventually clamped) cpu_capacity.
 		 */
-		sum_util += schedutil_cpu_util(cpu, util_cfs, cpu_cap,
+		sum_util += effective_cpu_util(cpu, util_cfs, cpu_cap,
 					       ENERGY_UTIL, NULL);
 
 		/*
@@ -6509,7 +6509,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 		 * NOTE: in case RT tasks are running, by default the
 		 * FREQUENCY_UTIL's utilization can be max OPP.
 		 */
-		cpu_util = schedutil_cpu_util(cpu, util_cfs, cpu_cap,
+		cpu_util = effective_cpu_util(cpu, util_cfs, cpu_cap,
 					      FREQUENCY_UTIL, tsk);
 		max_util = max(max_util, cpu_util);
 	}
@@ -6607,7 +6607,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 			 * IOW, placing the task there would make the CPU
 			 * overutilized. Take uclamp into account to see how
 			 * much capacity we can get out of the CPU; this is
-			 * aligned with schedutil_cpu_util().
+			 * aligned with sched_cpu_util().
 			 */
 			util = uclamp_rq_util_with(cpu_rq(cpu), util, p);
 			if (!fits_capacity(util, cpu_cap))
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index df80bfcea92e..4fab3b930ace 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2484,27 +2484,9 @@ static inline unsigned long capacity_orig_of(int cpu)
 {
 	return cpu_rq(cpu)->cpu_capacity_orig;
 }
-#endif
-
-/**
- * enum schedutil_type - CPU utilization type
- * @FREQUENCY_UTIL:	Utilization used to select frequency
- * @ENERGY_UTIL:	Utilization used during energy calculation
- *
- * The utilization signals of all scheduling classes (CFS/RT/DL) and IRQ time
- * need to be aggregated differently depending on the usage made of them. This
- * enum is used within schedutil_freq_util() to differentiate the types of
- * utilization expected by the callers, and adjust the aggregation accordingly.
- */
-enum schedutil_type {
-	FREQUENCY_UTIL,
-	ENERGY_UTIL,
-};
-#ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL
-
-unsigned long schedutil_cpu_util(int cpu, unsigned long util_cfs,
-				 unsigned long max, enum schedutil_type type,
+unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
+				 unsigned long max, enum cpu_util_type type,
 				 struct task_struct *p);
 
 static inline unsigned long cpu_bw_dl(struct rq *rq)
@@ -2533,14 +2515,7 @@ static inline unsigned long cpu_util_rt(struct rq *rq)
 {
 	return READ_ONCE(rq->avg_rt.util_avg);
 }
-#else /* CONFIG_CPU_FREQ_GOV_SCHEDUTIL */
-static inline unsigned long schedutil_cpu_util(int cpu, unsigned long util_cfs,
-				 unsigned long max, enum schedutil_type type,
-				 struct task_struct *p)
-{
-	return 0;
-}
-#endif /* CONFIG_CPU_FREQ_GOV_SCHEDUTIL */
+#endif
 
 #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
 static inline unsigned long cpu_util_irq(struct rq *rq)
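For experimentation outside the kernel, a compact userspace model of the
aggregation performed by effective_cpu_util() above. Illustrative only: it
omits uclamp and the RT-boost special case, and the function name
effective_util() and the input values are made up.

#include <stdio.h>

enum cpu_util_type { FREQUENCY_UTIL, ENERGY_UTIL };

/* Simplified model of effective_cpu_util() for one CPU of capacity 'max'. */
static unsigned long effective_util(unsigned long cfs, unsigned long rt,
				    unsigned long dl_util, unsigned long dl_bw,
				    unsigned long irq, unsigned long max,
				    enum cpu_util_type type)
{
	unsigned long util;

	if (irq >= max)
		return max;		/* IRQ/steal time alone saturates the CPU */

	util = cfs + rt;		/* same PELT metric, directly additive */

	if (util + dl_util >= max)
		return max;		/* no idle time left: ask for f_max */

	if (type == ENERGY_UTIL)
		util += dl_util;	/* energy wants the estimated running time */

	/* U' = irq + U * (max - irq) / max: scale for time hidden by IRQs */
	util = irq + (max - irq) * util / max;

	if (type == FREQUENCY_UTIL)
		util += dl_bw;		/* DEADLINE bandwidth must always be granted */

	return util < max ? util : max;
}

int main(void)
{
	unsigned long max = 1024;	/* plays the role of arch_scale_cpu_capacity() */

	/* cfs=300, rt=50, dl running=100, dl bandwidth=80, irq=64 */
	printf("frequency util: %lu\n",
	       effective_util(300, 50, 100, 80, 64, max, FREQUENCY_UTIL));
	printf("energy util:    %lu\n",
	       effective_util(300, 50, 100, 80, 64, max, ENERGY_UTIL));
	return 0;
}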
From patchwork Thu Nov 19 07:38:08 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 328395
From: Viresh Kumar
To: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Amit Daniel Kachhap,
    Daniel Lezcano, Viresh Kumar, Javi Merino, Zhang Rui, Amit Kucheria
Cc: linux-kernel@vger.kernel.org, Quentin Perret, Lukasz Luba,
    linux-pm@vger.kernel.org
Subject: [PATCH V3 2/2] thermal: cpufreq_cooling: Reuse sched_cpu_util() for SMP platforms
Date: Thu, 19 Nov 2020 13:08:08 +0530
Message-Id: <1fa9994395764ba19cfe6240d8b3c1ce390e8f82.1605770951.git.viresh.kumar@linaro.org>
X-Mailer: git-send-email 2.25.0.rc1.19.g042ed3e048af
In-Reply-To:
References:
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: linux-pm@vger.kernel.org

Several parts of the kernel are already using the effective CPU utilization
(as seen by the scheduler) to get the current load on the CPU. Do the same
here, instead of depending on the idle time of the CPU, which is comparatively
less accurate.

This is also the right thing to do, as it makes the cpufreq governor
(schedutil) align better with cpufreq_cooling: the power requested by
cpufreq_cooling will correspond to the next frequency requested by the
schedutil governor, since both are now using the same metric to calculate
load.

Note that this (and CPU frequency scaling in general) doesn't work that well
with idle injection, as idle injection is done from RT threads and is
therefore counted as load, while its intent is quite the opposite. That should
be solved separately though.

This was tested on the ARM Hikey6220 platform with hackbench, sysbench and
schbench. None of them showed any regression or significant improvement.
Schbench is the most important one of these, as it creates the scenario where
the utilization numbers provide a better estimate of the future.

Scenario 1: The CPUs were mostly idle in the previous polling window of the
IPA governor as the tasks were sleeping. Here are the details from the traces
(load is in %):

Old: thermal_power_cpu_get_power: cpus=00000000,000000ff freq=1200000 total_load=203 load={{0x35,0x1,0x0,0x31,0x0,0x0,0x64,0x0}} dynamic_power=1339
New: thermal_power_cpu_get_power: cpus=00000000,000000ff freq=1200000 total_load=600 load={{0x60,0x46,0x45,0x45,0x48,0x3b,0x61,0x44}} dynamic_power=3960

Here, the "Old" line gives the load and requested power (dynamic_power here)
calculated by the idle-time based implementation, while "New" is based on the
CPU utilization from the scheduler. As can be clearly seen, the load and
requested power numbers are simply incorrect in the idle-time based approach,
and the numbers collected from the CPU's utilization are much closer to
reality.

Scenario 2: The CPUs were busy in the previous polling window of the IPA
governor:

Old: thermal_power_cpu_get_power: cpus=00000000,000000ff freq=1200000 total_load=800 load={{0x64,0x64,0x64,0x64,0x64,0x64,0x64,0x64}} dynamic_power=5280
New: thermal_power_cpu_get_power: cpus=00000000,000000ff freq=1200000 total_load=708 load={{0x4d,0x5c,0x5c,0x5b,0x5c,0x5c,0x51,0x5b}} dynamic_power=4672

As can be seen, the idle-time based load is 100% for all the CPUs, as it took
only the last window into account, but in reality the CPUs aren't that loaded,
as shown by the utilization numbers.
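As a quick sanity check on these traces (illustrative only, not part of the
patch): the per-CPU load values are hex percentages and sum to total_load. A
standalone C snippet decoding the "New" line of scenario 1:

#include <stdio.h>

int main(void)
{
	/* Per-CPU loads (%) from the "New" trace line of scenario 1. */
	unsigned int load[] = { 0x60, 0x46, 0x45, 0x45, 0x48, 0x3b, 0x61, 0x44 };
	unsigned int total = 0;

	for (unsigned int i = 0; i < 8; i++) {
		printf("CPU%u: %u%%\n", i, load[i]);
		total += load[i];
	}

	printf("total_load=%u\n", total);	/* prints 600, matching the trace */
	return 0;
}

In all four trace lines above, dynamic_power scales with total_load at this
OPP (roughly 6.6 units per load point at 1.2 GHz), so the difference in
requested power comes entirely from the difference in the load estimate.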
Signed-off-by: Viresh Kumar
Reviewed-by: Lukasz Luba
---
 drivers/thermal/cpufreq_cooling.c | 68 ++++++++++++++++++++++++-------
 1 file changed, 54 insertions(+), 14 deletions(-)

--
2.25.0.rc1.19.g042ed3e048af

diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
index cc2959f22f01..a364a2fd84b1 100644
--- a/drivers/thermal/cpufreq_cooling.c
+++ b/drivers/thermal/cpufreq_cooling.c
@@ -76,7 +76,9 @@ struct cpufreq_cooling_device {
 	struct em_perf_domain *em;
 	struct cpufreq_policy *policy;
 	struct list_head node;
+#ifndef CONFIG_SMP
 	struct time_in_idle *idle_time;
+#endif
 	struct freq_qos_request qos_req;
 };
 
@@ -132,14 +134,35 @@ static u32 cpu_power_to_freq(struct cpufreq_cooling_device *cpufreq_cdev,
 }
 
 /**
- * get_load() - get load for a cpu since last updated
- * @cpufreq_cdev:	&struct cpufreq_cooling_device for this cpu
- * @cpu:	cpu number
- * @cpu_idx:	index of the cpu in time_in_idle*
+ * get_load() - get load for a cpu
+ * @cpufreq_cdev:	struct cpufreq_cooling_device for the cpu
+ * @cpu:	cpu number
+ * @cpu_idx:	index of the cpu in time_in_idle array
  *
  * Return: The average load of cpu @cpu in percentage since this
  * function was last called.
  */
+#ifdef CONFIG_SMP
+static u32 get_load(struct cpufreq_cooling_device *cpufreq_cdev, int cpu,
+		    int cpu_idx)
+{
+	unsigned long max = arch_scale_cpu_capacity(cpu);
+	unsigned long util;
+
+	util = sched_cpu_util(cpu, ENERGY_UTIL, max);
+	return (util * 100) / max;
+}
+
+static inline int allocate_idle_time(struct cpufreq_cooling_device *cpufreq_cdev)
+{
+	return 0;
+}
+
+static inline void free_idle_time(struct cpufreq_cooling_device *cpufreq_cdev)
+{
+}
+
+#else /* !CONFIG_SMP */
 static u32 get_load(struct cpufreq_cooling_device *cpufreq_cdev, int cpu,
 		    int cpu_idx)
 {
@@ -162,6 +185,26 @@ static u32 get_load(struct cpufreq_cooling_device *cpufreq_cdev, int cpu,
 	return load;
 }
 
+static int allocate_idle_time(struct cpufreq_cooling_device *cpufreq_cdev)
+{
+	unsigned int num_cpus = cpumask_weight(cpufreq_cdev->policy->related_cpus);
+
+	cpufreq_cdev->idle_time = kcalloc(num_cpus,
+					  sizeof(*cpufreq_cdev->idle_time),
+					  GFP_KERNEL);
+	if (!cpufreq_cdev->idle_time)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void free_idle_time(struct cpufreq_cooling_device *cpufreq_cdev)
+{
+	kfree(cpufreq_cdev->idle_time);
+	cpufreq_cdev->idle_time = NULL;
+}
+#endif /* CONFIG_SMP */
+
 /**
  * get_dynamic_power() - calculate the dynamic power
  * @cpufreq_cdev:	&cpufreq_cooling_device for this cdev
@@ -487,7 +530,7 @@ __cpufreq_cooling_register(struct device_node *np,
 	struct thermal_cooling_device *cdev;
 	struct cpufreq_cooling_device *cpufreq_cdev;
 	char dev_name[THERMAL_NAME_LENGTH];
-	unsigned int i, num_cpus;
+	unsigned int i;
 	struct device *dev;
 	int ret;
 	struct thermal_cooling_device_ops *cooling_ops;
@@ -498,7 +541,6 @@ __cpufreq_cooling_register(struct device_node *np,
 		return ERR_PTR(-ENODEV);
 	}
-
 	if (IS_ERR_OR_NULL(policy)) {
 		pr_err("%s: cpufreq policy isn't valid: %p\n", __func__, policy);
 		return ERR_PTR(-EINVAL);
 	}
@@ -516,12 +558,10 @@ __cpufreq_cooling_register(struct device_node *np,
 		return ERR_PTR(-ENOMEM);
 
 	cpufreq_cdev->policy = policy;
-	num_cpus = cpumask_weight(policy->related_cpus);
-	cpufreq_cdev->idle_time = kcalloc(num_cpus,
-					  sizeof(*cpufreq_cdev->idle_time),
-					  GFP_KERNEL);
-	if (!cpufreq_cdev->idle_time) {
-		cdev = ERR_PTR(-ENOMEM);
+
+	ret = allocate_idle_time(cpufreq_cdev);
+	if (ret) {
+		cdev = ERR_PTR(ret);
 		goto free_cdev;
 	}
 
@@ -581,7 +621,7 @@ __cpufreq_cooling_register(struct device_node *np,
 remove_ida:
 	ida_simple_remove(&cpufreq_ida, cpufreq_cdev->id);
 free_idle_time:
-	kfree(cpufreq_cdev->idle_time);
+	free_idle_time(cpufreq_cdev);
 free_cdev:
 	kfree(cpufreq_cdev);
 	return cdev;
@@ -674,7 +714,7 @@ void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
 	thermal_cooling_device_unregister(cdev);
 	freq_qos_remove_request(&cpufreq_cdev->qos_req);
 	ida_simple_remove(&cpufreq_ida, cpufreq_cdev->id);
-	kfree(cpufreq_cdev->idle_time);
+	free_idle_time(cpufreq_cdev);
 	kfree(cpufreq_cdev);
 }
 EXPORT_SYMBOL_GPL(cpufreq_cooling_unregister);
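To make the scenario 1 discrepancy concrete, here is a small userspace model
contrasting the two load estimates. It is a sketch under assumptions: the
idle-time formula only approximates the !CONFIG_SMP path (busy fraction of the
last polling window), the utilization formula mirrors the new get_load(), and
the numbers are invented.

#include <stdio.h>

/* Idle-time based estimate: busy percentage of the last polling window. */
static unsigned int load_from_idle_time(unsigned long long idle_us,
					unsigned long long window_us)
{
	if (idle_us >= window_us)
		return 0;
	return 100 - (unsigned int)((100 * idle_us) / window_us);
}

/* Utilization based estimate, as done by get_load() via sched_cpu_util(). */
static unsigned int load_from_util(unsigned long util, unsigned long max)
{
	return (util * 100) / max;
}

int main(void)
{
	/*
	 * A CPU that slept through the whole 100ms window reports 0% by the
	 * idle-time method, while its PELT utilization may still be high
	 * from recent activity, e.g. util=700 of max=1024 is ~68%.
	 */
	printf("idle-time based: %u%%\n", load_from_idle_time(100000, 100000));
	printf("util based:      %u%%\n", load_from_util(700, 1024));
	return 0;
}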