From patchwork Fri Dec 8 00:23:35 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 751990
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar,
	Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	Lukasz Luba, Wei Wang, Rick Yiu, Chung-Kai Mei, Qais Yousef
Subject: [PATCH v2 1/8] cpufreq: Change default transition delay to 2ms
Date: Fri, 8 Dec 2023 00:23:35 +0000
Message-Id: <20231208002342.367117-2-qyousef@layalina.io>
In-Reply-To: <20231208002342.367117-1-qyousef@layalina.io>
References: <20231208002342.367117-1-qyousef@layalina.io>

10ms is too high for today's hardware, even low-end ones.
This default ends up being used a lot on Arm machines at least. Pine64,
Mac mini and Pixel 6 all end up with a 10ms rate_limit_us when using
schedutil, and it's too high for all of them.

Change the default to 2ms, which should be 'pessimistic' enough for the
worst case scenario, but not too high for platforms with fast DVFS
hardware.

Signed-off-by: Qais Yousef (Google)
---
 drivers/cpufreq/cpufreq.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 934d35f570b7..9875284ca6e4 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -582,11 +582,11 @@ unsigned int cpufreq_policy_transition_delay_us(struct cpufreq_policy *policy)
		 * for platforms where transition_latency is in milliseconds, it
		 * ends up giving unrealistic values.
		 *
-		 * Cap the default transition delay to 10 ms, which seems to be
+		 * Cap the default transition delay to 2 ms, which seems to be
		 * a reasonable amount of time after which we should reevaluate
		 * the frequency.
		 */
-		return min(latency * LATENCY_MULTIPLIER, (unsigned int)10000);
+		return min(latency * LATENCY_MULTIPLIER, (unsigned int)(2 * MSEC_PER_SEC));
	}

	return LATENCY_MULTIPLIER;
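
As a rough illustration of the resulting defaults, here is a small
userspace sketch (not kernel code) that models only the default path of
cpufreq_policy_transition_delay_us() after this patch. It assumes
LATENCY_MULTIPLIER is 1000, as defined in include/linux/cpufreq.h; the
sample latencies are made up:

  #include <stdio.h>

  #define LATENCY_MULTIPLIER	1000	/* as in include/linux/cpufreq.h */
  #define MAX_DEFAULT_DELAY_US	2000	/* the new 2ms cap */

  /* Model of the default path of cpufreq_policy_transition_delay_us() */
  static unsigned int default_transition_delay_us(unsigned int latency_ns)
  {
  	unsigned int latency = latency_ns / 1000;	/* ns -> us */

  	if (latency) {
  		unsigned int delay = latency * LATENCY_MULTIPLIER;

  		return delay < MAX_DEFAULT_DELAY_US ? delay : MAX_DEFAULT_DELAY_US;
  	}

  	return LATENCY_MULTIPLIER;
  }

  int main(void)
  {
  	printf("%u\n", default_transition_delay_us(1000));	/* 1000us, under the cap */
  	printf("%u\n", default_transition_delay_us(50000));	/* capped at 2000us */
  	return 0;
  }

With the old 10000us cap the second case would have been 10ms; now any
platform whose scaled latency exceeds 2ms gets clamped to 2ms.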
From patchwork Fri Dec 8 00:23:36 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 752425
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar,
	Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	Lukasz Luba, Wei Wang, Rick Yiu, Chung-Kai Mei, Qais Yousef
Subject: [PATCH v2 2/8] sched: cpufreq: Rename map_util_perf to apply_dvfs_headroom
Date: Fri, 8 Dec 2023 00:23:36 +0000
Message-Id: <20231208002342.367117-3-qyousef@layalina.io>
In-Reply-To: <20231208002342.367117-1-qyousef@layalina.io>
References: <20231208002342.367117-1-qyousef@layalina.io>

We are providing headroom for the utilization to grow until the next
decision point at which the next frequency is picked. Give the function
a better name, and give it some documentation; it is not really mapping
anything.

Also move it to sched.h. This function relies on the util signal being
updated appropriately to provide headroom to grow, which is more of a
scheduler functionality than a cpufreq one. Move it to sched.h where
all the other util handling code belongs.

Signed-off-by: Qais Yousef (Google)
---
 include/linux/sched/cpufreq.h    |  5 -----
 kernel/sched/cpufreq_schedutil.c |  2 +-
 kernel/sched/sched.h             | 17 +++++++++++++++++
 3 files changed, 18 insertions(+), 6 deletions(-)

diff --git a/include/linux/sched/cpufreq.h b/include/linux/sched/cpufreq.h
index bdd31ab93bc5..d01755d3142f 100644
--- a/include/linux/sched/cpufreq.h
+++ b/include/linux/sched/cpufreq.h
@@ -28,11 +28,6 @@ static inline unsigned long map_util_freq(unsigned long util,
 {
	return freq * util / cap;
 }
-
-static inline unsigned long map_util_perf(unsigned long util)
-{
-	return util + (util >> 2);
-}
 #endif /* CONFIG_CPU_FREQ */

 #endif /* _LINUX_SCHED_CPUFREQ_H */
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 4ee8ad70be99..79c3b96dc02c 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -157,7 +157,7 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
				 unsigned long max)
 {
	/* Add dvfs headroom to actual utilization */
-	actual = map_util_perf(actual);
+	actual = apply_dvfs_headroom(actual);
	/* Actually we don't need to target the max performance */
	if (actual < max)
		max = actual;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e58a54bda77d..0da3425200b1 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3002,6 +3002,23 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
				 unsigned long min,
				 unsigned long max);

+/*
+ * DVFS decisions are made at discrete points. If the CPU stays busy, the util
+ * will continue to grow, which means it could need to run at a higher
+ * frequency before the next decision point is reached. IOW, we can't follow
+ * the util as it grows immediately; there's a delay before we issue a request
+ * to go to a higher frequency. The headroom caters for this delay so the
+ * system continues to run at an adequate performance point.
+ *
+ * This function provides enough headroom to maintain adequate performance
+ * assuming the CPU continues to be busy.
+ *
+ * At the moment it is a constant multiplication by 1.25.
+ */
+static inline unsigned long apply_dvfs_headroom(unsigned long util)
+{
+	return util + (util >> 2);
+}

 /*
  * Verify the fitness of task @p to run on @cpu taking into account the
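
The headroom itself is simple fixed-point arithmetic. A minimal
standalone sketch of what the renamed helper computes:

  #include <assert.h>

  static unsigned long apply_dvfs_headroom(unsigned long util)
  {
  	return util + (util >> 2);	/* util * 1.25, no floating point */
  }

  int main(void)
  {
  	/* a CPU that is 800/1024 busy gets capacity for 1000/1024 requested */
  	assert(apply_dvfs_headroom(800) == 1000);
  	/* the result can exceed 1024; callers clamp against the CPU's capacity */
  	assert(apply_dvfs_headroom(1024) == 1280);
  	return 0;
  }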
From patchwork Fri Dec 8 00:23:37 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 751989
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar,
	Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	Lukasz Luba, Wei Wang, Rick Yiu, Chung-Kai Mei, Qais Yousef
Subject: [PATCH v2 3/8] sched/pelt: Add a new function to approximate the future util_avg value
Date: Fri, 8 Dec 2023 00:23:37 +0000
Message-Id: <20231208002342.367117-4-qyousef@layalina.io>
In-Reply-To: <20231208002342.367117-1-qyousef@layalina.io>
References: <20231208002342.367117-1-qyousef@layalina.io>

Given a util_avg value, the new function returns the expected future
value after a given runtime delta. This will be useful in later patches
to help replace some magic margins with more deterministic behavior.

Signed-off-by: Qais Yousef (Google)
---
 kernel/sched/pelt.c  | 22 +++++++++++++++++++++-
 kernel/sched/sched.h |  2 ++
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 63b6cf898220..81555a8288be 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -466,4 +466,24 @@ int update_irq_load_avg(struct rq *rq, u64 running)

	return ret;
 }
-#endif
+#endif /* CONFIG_HAVE_SCHED_AVG_IRQ */
+
+/*
+ * Approximate the new util_avg value assuming an entity has continued to run
+ * for @delta us.
+ */
+unsigned long approximate_util_avg(unsigned long util, u64 delta)
+{
+	struct sched_avg sa = {
+		.util_sum = util * PELT_MIN_DIVIDER,
+		.util_avg = util,
+	};
+
+	if (unlikely(!delta))
+		return util;
+
+	accumulate_sum(delta, &sa, 1, 0, 1);
+	___update_load_avg(&sa, 0);
+
+	return sa.util_avg;
+}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 0da3425200b1..7e5a86a376f8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3002,6 +3002,8 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
				 unsigned long min,
				 unsigned long max);

+unsigned long approximate_util_avg(unsigned long util, u64 delta);
+
 /*
  * DVFS decisions are made at discrete points. If the CPU stays busy, the util
  * will continue to grow, which means it could need to run at a higher
  * frequency
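
To build intuition for what approximate_util_avg() computes, the
following standalone floating-point model can be used. It is an
idealization of the kernel's fixed-point PELT accounting, assuming the
default 32ms half-life and ~1ms accounting periods (build with -lm);
the numbers are illustrative, not taken from the kernel:

  #include <math.h>
  #include <stdio.h>

  /* Model: each 1ms running period decays util toward 1024 by y = 0.5^(1/32) */
  static double model_util_avg(double util, unsigned int delta_ms)
  {
  	const double y = pow(0.5, 1.0 / 32.0);

  	while (delta_ms--)
  		util = util * y + 1024.0 * (1.0 - y);

  	return util;
  }

  int main(void)
  {
  	/* a util of 600 grows to ~667 if the CPU stays busy for another 8ms */
  	printf("%.1f\n", model_util_avg(600.0, 8));
  	return 0;
  }

The closed form of the same curve is util(t) = 1024 + (util - 1024) *
0.5^(t/32) for t in ms, which is what the loop converges to.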
From patchwork Fri Dec 8 00:23:38 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 752424
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar,
	Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	Lukasz Luba, Wei Wang, Rick Yiu, Chung-Kai Mei, Qais Yousef
Subject: [PATCH v2 4/8] sched/pelt: Add a new function to approximate runtime to reach given util
Date: Fri, 8 Dec 2023 00:23:38 +0000
Message-Id: <20231208002342.367117-5-qyousef@layalina.io>
In-Reply-To: <20231208002342.367117-1-qyousef@layalina.io>
References: <20231208002342.367117-1-qyousef@layalina.io>

It is basically the ramp-up time from 0 to a given value. It will be
used later to implement a new tunable to control the response time of
schedutil.

Signed-off-by: Qais Yousef (Google)
---
 kernel/sched/pelt.c  | 21 +++++++++++++++++++++
 kernel/sched/sched.h |  1 +
 2 files changed, 22 insertions(+)

diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 81555a8288be..00a1b9c1bf16 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -487,3 +487,24 @@ unsigned long approximate_util_avg(unsigned long util, u64 delta)

	return sa.util_avg;
 }
+
+/*
+ * Approximate the amount of runtime (in ms) required to reach @util.
+ */
+u64 approximate_runtime(unsigned long util)
+{
+	struct sched_avg sa = {};
+	u64 delta = 1024; /* period = 1024us = ~1ms */
+	u64 runtime = 0;
+
+	if (unlikely(!util))
+		return runtime;
+
+	while (sa.util_avg < util) {
+		accumulate_sum(delta, &sa, 1, 0, 1);
+		___update_load_avg(&sa, 0);
+		runtime++;
+	}
+
+	return runtime;
+}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 7e5a86a376f8..2de64f59853c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3003,6 +3003,7 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
				 unsigned long max);

 unsigned long approximate_util_avg(unsigned long util, u64 delta);
+u64 approximate_runtime(unsigned long util);

 /*
  * DVFS decisions are made at discrete points. If the CPU stays busy, the util
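
The ramp-up time also has a closed form in the same idealized model
used above: starting from 0, util(t) = 1024 * (1 - 0.5^(t/32)), so the
runtime to reach a target util is t = 32 * log2(1024 / (1024 - util)).
A hedged userspace sketch (same 32ms half-life assumption, build with
-lm):

  #include <math.h>
  #include <stdio.h>

  /* Closed-form inverse of the ramp-up curve: ms needed to go from 0 to @util */
  static double model_runtime_ms(double util)
  {
  	return 32.0 * log2(1024.0 / (1024.0 - util));
  }

  int main(void)
  {
  	/* reaching 896 (87.5% of 1024) takes 32 * log2(8) = 96ms */
  	printf("%.0f\n", model_runtime_ms(896.0));
  	return 0;
  }

The kernel function iterates the fixed-point accounting instead of
using this closed form, but the two agree to within a period.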
From patchwork Fri Dec 8 00:23:39 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 751988
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar,
	Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	Lukasz Luba, Wei Wang, Rick Yiu, Chung-Kai Mei, Qais Yousef
Subject: [PATCH v2 5/8] sched/fair: Remove magic hardcoded margin in fits_capacity()
Date: Fri, 8 Dec 2023 00:23:39 +0000
Message-Id: <20231208002342.367117-6-qyousef@layalina.io>
In-Reply-To: <20231208002342.367117-1-qyousef@layalina.io>
References: <20231208002342.367117-1-qyousef@layalina.io>

Replace the hardcoded margin value in fits_capacity() with better
dynamic logic. The 80% threshold is a magic value that has served its
purpose for now, but it no longer fits the variety of systems that
exist today. If a system is overpowered specifically, this 80% means
we leave a lot of capacity unused before we decide to upmigrate on an
HMP system. On some systems the little cores are underpowered, and the
ability to migrate away from them faster is desired.

The upmigration behavior should rely on the fact that a bad decision
will need load balance to kick in to perform a misfit migration. And I
think this is an adequate definition for what to consider as enough
headroom when deciding whether a util fits a capacity or not.

Use the new approximate_util_avg() function to predict the util if the
task continues to run for TICK_USEC. If the value is not strictly less
than the capacity, then it must not be placed there, i.e. it is
considered misfit.

Signed-off-by: Qais Yousef (Google)
---
 kernel/sched/fair.c | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bcea3d55d95d..b83448be3f79 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -101,16 +101,31 @@ int __weak arch_asym_cpu_priority(int cpu)
 }

 /*
- * The margin used when comparing utilization with CPU capacity.
+ * The util will fit the capacity if it has enough headroom to grow within the
+ * next tick - which is when any load balancing activity happens to do the
+ * correction.
  *
- * (default: ~20%)
+ * If util stays within the capacity before the tick has elapsed, then it
+ * should be fine. If not, then a correction action must happen shortly after
+ * it starts running, hence we treat it as !fit.
+ *
+ * TODO: TICK is not actually accurate enough. balance_interval is the correct
+ * one to use as the next load balance doesn't happen religiously at tick.
+ * Accessing balance_interval might be tricky and will require some refactoring
+ * first.
  */
-#define fits_capacity(cap, max)	((cap) * 1280 < (max) * 1024)
+static inline bool fits_capacity(unsigned long util, unsigned long capacity)
+{
+	return approximate_util_avg(util, TICK_USEC) < capacity;
+}

 /*
  * The margin used when comparing CPU capacities.
  * is 'cap1' noticeably greater than 'cap2'
  *
+ * TODO: use approximate_util_avg() to give something more quantifiable based
+ * on time? Like 1ms?
+ *
  * (default: ~5%)
  */
 #define capacity_greater(cap1, cap2) ((cap1) * 1024 > (cap2) * 1078)
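
To see how the predictive check differs from the old fixed margin, the
sketch below contrasts the two. It is a userspace model, not the kernel
code: it reuses the floating-point PELT model from above and assumes
HZ=250, i.e. a 4ms tick (build with -lm); the capacity value 440 is an
example:

  #include <math.h>
  #include <stdbool.h>
  #include <stdio.h>

  static double model_util_avg(double util, unsigned int delta_ms)
  {
  	const double y = pow(0.5, 1.0 / 32.0);

  	while (delta_ms--)
  		util = util * y + 1024.0 * (1.0 - y);
  	return util;
  }

  /* Old check: util must stay below ~80% of capacity (1280/1024 = 1.25) */
  static bool fits_capacity_old(unsigned long util, unsigned long max)
  {
  	return util * 1280 < max * 1024;
  }

  /* New check: util predicted one tick ahead must still be below capacity */
  static bool fits_capacity_new(double util, double capacity)
  {
  	return model_util_avg(util, 4) < capacity;	/* 4ms tick at HZ=250 */
  }

  int main(void)
  {
  	/* little CPU of capacity 440: the old threshold is a fixed 352 (80%) */
  	printf("old: %d new: %d\n",
  	       fits_capacity_old(380, 440),	/* 0: above the 80% margin */
  	       fits_capacity_new(380.0, 440.0));	/* 1: won't cross 440 this tick */
  	return 0;
  }

The headroom is now a function of how fast util can actually grow in
one tick rather than a flat 20%, so an overpowered big core no longer
rejects tasks it could comfortably hold.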
From patchwork Fri Dec 8 00:23:40 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 752423
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar,
	Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	Lukasz Luba, Wei Wang, Rick Yiu, Chung-Kai Mei, Qais Yousef
Subject: [PATCH v2 6/8] sched: cpufreq: Remove magic 1.25 headroom from apply_dvfs_headroom()
Date: Fri, 8 Dec 2023 00:23:40 +0000
Message-Id: <20231208002342.367117-7-qyousef@layalina.io>
In-Reply-To: <20231208002342.367117-1-qyousef@layalina.io>
References: <20231208002342.367117-1-qyousef@layalina.io>

Replace the 1.25 headroom in apply_dvfs_headroom() with better dynamic
logic. Instead of the magical 1.25 headroom, use the new
approximate_util_avg() to provide headroom based on the
dvfs_update_delay, which is the period at which the cpufreq governor
will send DVFS updates to the hardware.

Add a new percpu dvfs_update_delay that can be cheaply accessed
whenever apply_dvfs_headroom() is called. We expect cpufreq governors
that rely on util to drive their DVFS logic/algorithm to populate these
percpu variables. schedutil is the only such governor at the moment.

The behavior of schedutil will change, as the headroom will be less
than 1.25 for most systems since rate_limit_us is usually short.

Signed-off-by: Qais Yousef (Google)
---
 kernel/sched/core.c              |  1 +
 kernel/sched/cpufreq_schedutil.c | 13 +++++++++++--
 kernel/sched/sched.h             | 18 ++++++++++++++----
 3 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index db4be4921e7f..b4a1c8ea9e12 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -116,6 +116,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_update_nr_running_tp);
 EXPORT_TRACEPOINT_SYMBOL_GPL(sched_compute_energy_tp);

 DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
+DEFINE_PER_CPU_READ_MOSTLY(u64, dvfs_update_delay);

 #ifdef CONFIG_SCHED_DEBUG
 /*
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 79c3b96dc02c..1d4d6025c15f 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -157,7 +157,7 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
				 unsigned long max)
 {
	/* Add dvfs headroom to actual utilization */
-	actual = apply_dvfs_headroom(actual);
+	actual = apply_dvfs_headroom(actual, cpu);
	/* Actually we don't need to target the max performance */
	if (actual < max)
		max = actual;
@@ -535,15 +535,21 @@ rate_limit_us_store(struct gov_attr_set *attr_set, const char *buf, size_t count)
	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);
	struct sugov_policy *sg_policy;
	unsigned int rate_limit_us;
+	int cpu;

	if (kstrtouint(buf, 10, &rate_limit_us))
		return -EINVAL;

	tunables->rate_limit_us = rate_limit_us;

-	list_for_each_entry(sg_policy, &attr_set->policy_list, tunables_hook)
+	list_for_each_entry(sg_policy, &attr_set->policy_list, tunables_hook) {
		sg_policy->freq_update_delay_ns = rate_limit_us * NSEC_PER_USEC;

+		for_each_cpu(cpu, sg_policy->policy->cpus)
+			per_cpu(dvfs_update_delay, cpu) = rate_limit_us;
+	}
+
	return count;
 }
@@ -824,6 +830,9 @@ static int sugov_start(struct cpufreq_policy *policy)
		memset(sg_cpu, 0, sizeof(*sg_cpu));
		sg_cpu->cpu = cpu;
		sg_cpu->sg_policy = sg_policy;
+
+		per_cpu(dvfs_update_delay, cpu) = sg_policy->tunables->rate_limit_us;
+
		cpufreq_add_update_util_hook(cpu, &sg_cpu->update_util, uu);
	}
	return 0;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 2de64f59853c..bbece0eb053a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3005,6 +3005,15 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
 unsigned long approximate_util_avg(unsigned long util, u64 delta);
 u64 approximate_runtime(unsigned long util);

+/*
+ * Any governor that relies on the util signal to drive DVFS must populate
+ * these percpu dvfs_update_delay variables.
+ *
+ * It should describe the rate/delay at which the governor sends DVFS freq
+ * updates to the hardware, in us.
+ */
+DECLARE_PER_CPU_READ_MOSTLY(u64, dvfs_update_delay);
+
 /*
  * DVFS decisions are made at discrete points. If the CPU stays busy, the util
  * will continue to grow, which means it could need to run at a higher
@@ -3014,13 +3023,14 @@ u64 approximate_runtime(unsigned long util);
  * system continues to run at an adequate performance point.
  *
  * This function provides enough headroom to maintain adequate performance
- * assuming the CPU continues to be busy.
+ * assuming the CPU continues to be busy. This headroom is based on the
+ * dvfs_update_delay of the cpufreq governor.
  *
- * At the moment it is a constant multiplication by 1.25.
+ * XXX: Should we provide headroom when the util is decaying?
  */
-static inline unsigned long apply_dvfs_headroom(unsigned long util)
+static inline unsigned long apply_dvfs_headroom(unsigned long util, int cpu)
 {
-	return util + (util >> 2);
+	return approximate_util_avg(util, per_cpu(dvfs_update_delay, cpu));
 }

 /*
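
A quick back-of-envelope comparison using the same userspace PELT model
shows why the headroom shrinks: with a 2ms rate_limit_us, a util of 800
is only predicted to grow to about 810 before the next frequency
update, versus the flat 1000 the old 1.25 multiplier requested. A
hedged sketch (the delays are examples; build with -lm):

  #include <math.h>
  #include <stdio.h>

  static double model_util_avg(double util, unsigned int delta_ms)
  {
  	const double y = pow(0.5, 1.0 / 32.0);

  	while (delta_ms--)
  		util = util * y + 1024.0 * (1.0 - y);
  	return util;
  }

  int main(void)
  {
  	/* old constant headroom: 800 * 1.25 = 1000 */
  	printf("old:             %.0f\n", 800.0 * 1.25);
  	/* new: predicted growth over the governor's update delay */
  	printf("2ms rate limit:  %.0f\n", model_util_avg(800.0, 2));	/* ~810 */
  	printf("10ms rate limit: %.0f\n", model_util_avg(800.0, 10));	/* ~844 */
  	return 0;
  }

In other words, fast DVFS hardware now pays only for the delay it
actually has, instead of a fixed 25% tax.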
From patchwork Fri Dec 8 00:23:41 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 751987
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar,
	Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	Lukasz Luba, Wei Wang, Rick Yiu, Chung-Kai Mei, Qais Yousef
Subject: [PATCH v2 7/8] sched/schedutil: Add a new tunable to dictate response time
Date: Fri, 8 Dec 2023 00:23:41 +0000
Message-Id: <20231208002342.367117-8-qyousef@layalina.io>
In-Reply-To: <20231208002342.367117-1-qyousef@layalina.io>
References: <20231208002342.367117-1-qyousef@layalina.io>

The new tunable, response_time_ms, allows us to speed up or slow down
the response time of the policy to meet the perf, power and thermal
characteristics desired by the user/sysadmin. There's no single
universal trade-off that we can apply to all systems, even if they use
the same SoC. The form factor of the system, the dominant use case,
and, in the case of battery powered systems, the size of the battery
and the presence or absence of active cooling can all play a big role
in what would be best to use.

The new tunable provides sensible defaults, yet gives the power to
control the response time to the user/sysadmin, if they wish to.

This tunable is applied before we apply the DVFS headroom. The default
behavior of applying a 1.25 headroom can now be reinstated easily. But
we continue to keep the minimum required headroom to overcome the
hardware's limitation in its speed to change DVFS. Any additional
headroom to speed things up must be applied by userspace to match their
expectation for best perf/watt, as it dictates a type of policy that
will be better for some systems but worse for others.

There's a whitespace cleanup included in sugov_start().

Signed-off-by: Qais Yousef (Google)
---
 Documentation/admin-guide/pm/cpufreq.rst |  17 +++-
 drivers/cpufreq/cpufreq.c                |   4 +-
 include/linux/cpufreq.h                  |   3 +
 kernel/sched/cpufreq_schedutil.c         | 115 ++++++++++++++++++++++-
 4 files changed, 132 insertions(+), 7 deletions(-)

diff --git a/Documentation/admin-guide/pm/cpufreq.rst b/Documentation/admin-guide/pm/cpufreq.rst
index 6adb7988e0eb..fa0d602a920e 100644
--- a/Documentation/admin-guide/pm/cpufreq.rst
+++ b/Documentation/admin-guide/pm/cpufreq.rst
@@ -417,7 +417,7 @@ is passed by the scheduler to the governor callback which causes the frequency
 to go up to the allowed maximum immediately and then draw back to the value
 returned by the above formula over time.
-This governor exposes only one tunable:
+This governor exposes two tunables:

 ``rate_limit_us``
	Minimum time (in microseconds) that has to pass between two consecutive
@@ -427,6 +427,21 @@ This governor exposes only one tunable:
	The purpose of this tunable is to reduce the scheduler context overhead
	of the governor which might be excessive without it.

+``response_time_ms``
+	Amount of time (in milliseconds) required to ramp the policy from the
+	lowest to the highest frequency. Can be decreased to speed up the
+	responsiveness of the system, or increased to slow the system down in
+	the hope of saving power. The best perf/watt will depend on the system
+	characteristics and the dominant workload you expect to run. For
+	userspace that has smart context on the type of workload running (like
+	in Android), one can tune this to suit the demand of that workload.
+
+	Note that when slowing the response down, you can end up effectively
+	chopping off the top frequencies for that policy, as the util is
+	capped at 1024. On HMP systems this chopping effect will only occur on
+	the biggest core, whose capacity is 1024. Don't rely on this behavior,
+	as this is a limitation that can hopefully be improved in the future.
+
 This governor generally is regarded as a replacement for the older `ondemand`_
 and `conservative`_ governors (described below), as it is simpler and more
 tightly integrated with the CPU scheduler, its overhead in terms of CPU context

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 9875284ca6e4..15c397ce3252 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -533,8 +533,8 @@ void cpufreq_disable_fast_switch(struct cpufreq_policy *policy)
 }
 EXPORT_SYMBOL_GPL(cpufreq_disable_fast_switch);

-static unsigned int __resolve_freq(struct cpufreq_policy *policy,
-		unsigned int target_freq, unsigned int relation)
+unsigned int __resolve_freq(struct cpufreq_policy *policy,
+		unsigned int target_freq, unsigned int relation)
 {
	unsigned int idx;

diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
index 1c5ca92a0555..29c3723653a3 100644
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -613,6 +613,9 @@ int cpufreq_driver_target(struct cpufreq_policy *policy,
 int __cpufreq_driver_target(struct cpufreq_policy *policy,
				 unsigned int target_freq,
				 unsigned int relation);
+unsigned int __resolve_freq(struct cpufreq_policy *policy,
+			    unsigned int target_freq,
+			    unsigned int relation);
 unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy,
					 unsigned int target_freq);
 unsigned int cpufreq_policy_transition_delay_us(struct cpufreq_policy *policy);
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 1d4d6025c15f..788208becc13 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -8,9 +8,12 @@

 #define IOWAIT_BOOST_MIN	(SCHED_CAPACITY_SCALE / 8)

+DEFINE_PER_CPU_READ_MOSTLY(unsigned long, response_time_mult);
+
 struct sugov_tunables {
	struct gov_attr_set	attr_set;
	unsigned int		rate_limit_us;
+	unsigned int		response_time_ms;
 };

 struct sugov_policy {
@@ -22,6 +25,7 @@ struct sugov_policy {
	raw_spinlock_t		update_lock;
	u64			last_freq_update_time;
	s64			freq_update_delay_ns;
+	unsigned int		freq_response_time_ms;
	unsigned int		next_freq;
	unsigned int		cached_raw_freq;

@@ -59,6 +63,70 @@ static DEFINE_PER_CPU(struct sugov_cpu, sugov_cpu);

 /************************ Governor internals ***********************/

+static inline u64 sugov_calc_freq_response_ms(struct sugov_policy *sg_policy)
+{
+	int cpu = cpumask_first(sg_policy->policy->cpus);
+	unsigned long cap = arch_scale_cpu_capacity(cpu);
+	unsigned int max_freq, sec_max_freq;
+
+	max_freq = sg_policy->policy->cpuinfo.max_freq;
+	sec_max_freq = __resolve_freq(sg_policy->policy,
+				      max_freq - 1,
+				      CPUFREQ_RELATION_H);
+
+	/*
+	 * We will request max_freq as soon as util crosses the capacity at
+	 * the second highest frequency. So effectively our response time is
+	 * the util at which we cross the cap@2nd_highest_freq.
+	 */
+	cap = sec_max_freq * cap / max_freq;
+
+	return approximate_runtime(cap + 1);
+}
+
+static inline void sugov_update_response_time_mult(struct sugov_policy *sg_policy)
+{
+	unsigned long mult;
+	int cpu;
+
+	if (unlikely(!sg_policy->freq_response_time_ms))
+		sg_policy->freq_response_time_ms = sugov_calc_freq_response_ms(sg_policy);
+
+	mult = sg_policy->freq_response_time_ms * SCHED_CAPACITY_SCALE;
+	mult /= sg_policy->tunables->response_time_ms;
+
+	if (SCHED_WARN_ON(!mult))
+		mult = SCHED_CAPACITY_SCALE;
+
+	for_each_cpu(cpu, sg_policy->policy->cpus)
+		per_cpu(response_time_mult, cpu) = mult;
+}
+
+/*
+ * Shrink or expand how long it takes to reach the maximum performance of the
+ * policy.
+ *
+ * sg_policy->freq_response_time_ms is a constant value defined by the PELT
+ * HALFLIFE and the capacity of the policy (assuming HMP systems).
+ *
+ * sg_policy->tunables->response_time_ms is a user defined response time. By
+ * setting it lower than sg_policy->freq_response_time_ms, the system will
+ * respond faster to changes in util, which will result in reaching the
+ * maximum performance point quicker. By setting it higher, it'll slow down
+ * the amount of time required to reach the maximum OPP.
+ *
+ * This should be applied when selecting the frequency.
+ */
+static inline unsigned long
+sugov_apply_response_time(unsigned long util, int cpu)
+{
+	unsigned long mult;
+
+	mult = per_cpu(response_time_mult, cpu) * util;
+
+	return mult >> SCHED_CAPACITY_SHIFT;
+}
+
 static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
 {
	s64 delta_ns;
@@ -156,7 +224,10 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
				 unsigned long min,
				 unsigned long max)
 {
-	/* Add dvfs headroom to actual utilization */
+	/*
+	 * Speed up/slow down response time first, then apply DVFS headroom.
+	 */
+	actual = sugov_apply_response_time(actual, cpu);
	actual = apply_dvfs_headroom(actual, cpu);
	/* Actually we don't need to target the max performance */
	if (actual < max)
		max = actual;
@@ -555,8 +626,42 @@ rate_limit_us_store(struct gov_attr_set *attr_set, const char *buf, size_t count)

 static struct governor_attr rate_limit_us = __ATTR_RW(rate_limit_us);

+static ssize_t response_time_ms_show(struct gov_attr_set *attr_set, char *buf)
+{
+	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);
+
+	return sprintf(buf, "%u\n", tunables->response_time_ms);
+}
+
+static ssize_t
+response_time_ms_store(struct gov_attr_set *attr_set, const char *buf, size_t count)
+{
+	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);
+	struct sugov_policy *sg_policy;
+	unsigned int response_time_ms;
+
+	if (kstrtouint(buf, 10, &response_time_ms))
+		return -EINVAL;
+
+	/* XXX need special handling for high values? */
+
+	tunables->response_time_ms = response_time_ms;
+
+	list_for_each_entry(sg_policy, &attr_set->policy_list, tunables_hook) {
+		if (sg_policy->tunables == tunables) {
+			sugov_update_response_time_mult(sg_policy);
+			break;
+		}
+	}
+
+	return count;
+}
+
+static struct governor_attr response_time_ms = __ATTR_RW(response_time_ms);
+
 static struct attribute *sugov_attrs[] = {
	&rate_limit_us.attr,
+	&response_time_ms.attr,
	NULL
 };
 ATTRIBUTE_GROUPS(sugov);
@@ -744,11 +849,13 @@ static int sugov_init(struct cpufreq_policy *policy)
		goto stop_kthread;
	}

-	tunables->rate_limit_us = cpufreq_policy_transition_delay_us(policy);
-
	policy->governor_data = sg_policy;
	sg_policy->tunables = tunables;

+	tunables->rate_limit_us = cpufreq_policy_transition_delay_us(policy);
+	tunables->response_time_ms = sugov_calc_freq_response_ms(sg_policy);
+	sugov_update_response_time_mult(sg_policy);
+
	ret = kobject_init_and_add(&tunables->attr_set.kobj, &sugov_tunables_ktype,
				   get_governor_parent_kobj(policy), "%s",
				   schedutil_gov.name);
@@ -808,7 +915,7 @@ static int sugov_start(struct cpufreq_policy *policy)
	void (*uu)(struct update_util_data *data, u64 time, unsigned int flags);
	unsigned int cpu;

-	sg_policy->freq_update_delay_ns	= sg_policy->tunables->rate_limit_us * NSEC_PER_USEC;
+	sg_policy->freq_update_delay_ns = sg_policy->tunables->rate_limit_us * NSEC_PER_USEC;
	sg_policy->last_freq_update_time	= 0;
	sg_policy->next_freq			= 0;
	sg_policy->work_in_progress		= false;
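
The scaling itself is simple fixed-point arithmetic: util is multiplied
by default_response / user_response. A standalone sketch of the math
(the 200ms default is the rough figure for a 1024-capacity policy
quoted later in this series; real values come from
sugov_calc_freq_response_ms()):

  #include <assert.h>

  #define SCHED_CAPACITY_SHIFT	10
  #define SCHED_CAPACITY_SCALE	(1UL << SCHED_CAPACITY_SHIFT)

  /* Model of sugov_apply_response_time(): scale util by default/user ratio */
  static unsigned long scale_util(unsigned long util,
  				unsigned long default_response_ms,
  				unsigned long response_time_ms)
  {
  	unsigned long mult = default_response_ms * SCHED_CAPACITY_SCALE
  			     / response_time_ms;

  	return (util * mult) >> SCHED_CAPACITY_SHIFT;
  }

  int main(void)
  {
  	/* user halves the response time -> util doubles, ramping twice as fast */
  	assert(scale_util(400, 200, 100) == 800);
  	/* user doubles it -> util halves, ramping half as fast */
  	assert(scale_util(400, 200, 400) == 200);
  	/* default value -> identity */
  	assert(scale_util(400, 200, 200) == 400);
  	return 0;
  }

The "chopping" caveat in the documentation follows directly: once the
scaled util saturates at 1024 the top frequencies become unreachable.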
From patchwork Fri Dec 8 00:23:42 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 752422
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar,
	Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	Lukasz Luba, Wei Wang, Rick Yiu, Chung-Kai Mei, Qais Yousef
Subject: [PATCH v2 8/8] sched/pelt: Introduce PELT multiplier
Date: Fri, 8 Dec 2023 00:23:42 +0000
Message-Id: <20231208002342.367117-9-qyousef@layalina.io>
In-Reply-To: <20231208002342.367117-1-qyousef@layalina.io>
References: <20231208002342.367117-1-qyousef@layalina.io>

From: Vincent Donnefort

The new sched_pelt_multiplier boot param allows a user to set a clock
multiplier of x2 or x4 (x1 being the default). This clock multiplier
artificially speeds up PELT ramp up/down, similarly to using a faster
half-life than the default 32ms:

  - x1: 32ms half-life
  - x2: 16ms half-life
  - x4:  8ms half-life

Internally, a new clock is created: rq->clock_task_mult. It sits in the
clock hierarchy between rq->clock_task and rq->clock_pelt.

The param is set as read only and can only be changed at boot time via:

  kernel.sched_pelt_multiplier=[1, 2, 4]

PELT has a big impact on the overall system response and reactiveness
to change. A smaller PELT HF means it'll require less time to reach the
maximum performance point of the system when the system becomes fully
busy, and equally a shorter time to go back to the lowest performance
point when the system goes back to idle. This faster reaction impacts
both the DVFS response and the migration time between clusters in an
HMP system.

Smaller PELT values are expected to give better performance at the cost
of more power. Underpowered systems can particularly benefit from
smaller values. Powerful systems can still benefit from smaller values
if they want to be tuned towards perf more and power is not the major
concern for them.

This, combined with response_time_ms from schedutil, should give the
user and sysadmin a deterministic way to control the triangle of power,
perf and thermals for their system. The default response_time_ms will
halve as the PELT HF halves.

Update approximate_{util_avg, runtime}() to take into account the PELT
HALFLIFE multiplier.
Signed-off-by: Vincent Donnefort
Signed-off-by: Dietmar Eggemann
[Converted from sysctl to boot param and updated commit message]
Signed-off-by: Qais Yousef (Google)
---
 kernel/sched/core.c  |  2 +-
 kernel/sched/pelt.c  | 52 ++++++++++++++++++++++++++++++++++++++++++--
 kernel/sched/pelt.h  | 42 +++++++++++++++++++++++++++++++----
 kernel/sched/sched.h |  1 +
 4 files changed, 90 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b4a1c8ea9e12..9c8626b4ddff 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -745,7 +745,7 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)
	if ((irq_delta + steal) && sched_feat(NONTASK_CAPACITY))
		update_irq_load_avg(rq, irq_delta + steal);
 #endif
-	update_rq_clock_pelt(rq, delta);
+	update_rq_clock_task_mult(rq, delta);
 }

 void update_rq_clock(struct rq *rq)
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 00a1b9c1bf16..0a10e56f76c7 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -468,6 +468,54 @@ int update_irq_load_avg(struct rq *rq, u64 running)
 }
 #endif /* CONFIG_HAVE_SCHED_AVG_IRQ */

+__read_mostly unsigned int sched_pelt_lshift;
+static unsigned int sched_pelt_multiplier = 1;
+
+static int set_sched_pelt_multiplier(const char *val, const struct kernel_param *kp)
+{
+	int ret;
+
+	ret = param_set_int(val, kp);
+	if (ret)
+		goto error;
+
+	switch (sched_pelt_multiplier) {
+	case 1:
+		fallthrough;
+	case 2:
+		fallthrough;
+	case 4:
+		WRITE_ONCE(sched_pelt_lshift,
+			   sched_pelt_multiplier >> 1);
+		break;
+	default:
+		ret = -EINVAL;
+		goto error;
+	}
+
+	return 0;
+
+error:
+	sched_pelt_multiplier = 1;
+	return ret;
+}
+
+static const struct kernel_param_ops sched_pelt_multiplier_ops = {
+	.set = set_sched_pelt_multiplier,
+	.get = param_get_int,
+};
+
+#ifdef MODULE_PARAM_PREFIX
+#undef MODULE_PARAM_PREFIX
+#endif
+/* XXX: should we use sched as prefix? */
+#define MODULE_PARAM_PREFIX "kernel."
+module_param_cb(sched_pelt_multiplier, &sched_pelt_multiplier_ops, &sched_pelt_multiplier, 0444);
+MODULE_PARM_DESC(sched_pelt_multiplier, "PELT HALFLIFE helps control the responsiveness of the system.");
+MODULE_PARM_DESC(sched_pelt_multiplier, "Accepted value: 1 32ms PELT HALFLIFE - roughly 200ms to go from 0 to max performance point (default).");
+MODULE_PARM_DESC(sched_pelt_multiplier, "                2 16ms PELT HALFLIFE - roughly 100ms to go from 0 to max performance point.");
+MODULE_PARM_DESC(sched_pelt_multiplier, "                4  8ms PELT HALFLIFE - roughly  50ms to go from 0 to max performance point.");
+
 /*
  * Approximate the new util_avg value assuming an entity has continued to run
  * for @delta us.
  */
@@ -482,7 +530,7 @@ unsigned long approximate_util_avg(unsigned long util, u64 delta)
	if (unlikely(!delta))
		return util;

-	accumulate_sum(delta, &sa, 1, 0, 1);
+	accumulate_sum(delta << sched_pelt_lshift, &sa, 1, 0, 1);
	___update_load_avg(&sa, 0);

	return sa.util_avg;
@@ -494,7 +542,7 @@ unsigned long approximate_util_avg(unsigned long util, u64 delta)
 u64 approximate_runtime(unsigned long util)
 {
	struct sched_avg sa = {};
-	u64 delta = 1024; /* period = 1024us = ~1ms */
+	u64 delta = 1024 << sched_pelt_lshift; /* period = 1024us = ~1ms */
	u64 runtime = 0;

	if (unlikely(!util))
diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
index 3a0e0dc28721..9b35b5072bae 100644
--- a/kernel/sched/pelt.h
+++ b/kernel/sched/pelt.h
@@ -61,6 +61,14 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
	WRITE_ONCE(avg->util_est.enqueued, enqueued);
 }

+static inline u64 rq_clock_task_mult(struct rq *rq)
+{
+	lockdep_assert_rq_held(rq);
+	assert_clock_updated(rq);
+
+	return rq->clock_task_mult;
+}
+
 static inline u64 rq_clock_pelt(struct rq *rq)
 {
	lockdep_assert_rq_held(rq);
@@ -72,7 +80,7 @@ static inline u64 rq_clock_pelt(struct rq *rq)
 /* The rq is idle, we can sync to clock_task */
 static inline void _update_idle_rq_clock_pelt(struct rq *rq)
 {
-	rq->clock_pelt = rq_clock_task(rq);
+	rq->clock_pelt = rq_clock_task_mult(rq);

	u64_u32_store(rq->clock_idle, rq_clock(rq));
	/* Paired with smp_rmb in migrate_se_pelt_lag() */
@@ -121,6 +129,27 @@ static inline void update_rq_clock_pelt(struct rq *rq, s64 delta)
	rq->clock_pelt += delta;
 }

+extern unsigned int sched_pelt_lshift;
+
+/*
+ * absolute time   |1      |2      |3      |4      |5      |6      |
+ * @ mult = 1      --------****************--------****************-
+ * @ mult = 2      --------********----------------********---------
+ * @ mult = 4      --------****--------------------****-------------
+ *
+ * clock task mult
+ * @ mult = 2      |   |   |2  |3  |   |   |   |   |5  |6  |   |   |
+ * @ mult = 4      | | | | |2|3| | | | | | | | | | |5|6| | | | | | |
+ */
+static inline void update_rq_clock_task_mult(struct rq *rq, s64 delta)
+{
+	delta <<= READ_ONCE(sched_pelt_lshift);
+
+	rq->clock_task_mult += delta;
+
+	update_rq_clock_pelt(rq, delta);
+}
+
 /*
  * When rq becomes idle, we have to check if it has lost idle time
  * because it was fully busy. A rq is fully used when the /Sum util_sum
@@ -147,7 +176,7 @@ static inline void update_idle_rq_clock_pelt(struct rq *rq)
	 * rq's clock_task.
	 */
	if (util_sum >= divider)
-		rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
+		rq->lost_idle_time += rq_clock_task_mult(rq) - rq->clock_pelt;

	_update_idle_rq_clock_pelt(rq);
 }
@@ -218,13 +247,18 @@ update_irq_load_avg(struct rq *rq, u64 running)
	return 0;
 }

-static inline u64 rq_clock_pelt(struct rq *rq)
+static inline u64 rq_clock_task_mult(struct rq *rq)
 {
	return rq_clock_task(rq);
 }

+static inline u64 rq_clock_pelt(struct rq *rq)
+{
+	return rq_clock_task_mult(rq);
+}
+
 static inline void
-update_rq_clock_pelt(struct rq *rq, s64 delta) { }
+update_rq_clock_task_mult(struct rq *rq, s64 delta) { }

 static inline void
 update_idle_rq_clock_pelt(struct rq *rq) { }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index bbece0eb053a..a7c89c623250 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1029,6 +1029,7 @@ struct rq {
	u64			clock;
	/* Ensure that all clocks are in the same cache line */
	u64			clock_task ____cacheline_aligned;
+	u64			clock_task_mult;
	u64			clock_pelt;
	unsigned long		lost_idle_time;
	u64			clock_pelt_idle;
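
For reference, the multiplier-to-half-life mapping falls out of the
clock shift: left-shifting the task clock delta by lshift makes PELT
see time pass 2^lshift times faster, which divides the 32ms half-life
accordingly. A small userspace sketch of the mapping (the ~200/100/50ms
ramp figures are the rough values quoted in the commit message, not
computed here):

  #include <stdio.h>

  /* mult = 1, 2, 4 -> lshift = 0, 1, 2, as in set_sched_pelt_multiplier() */
  static unsigned int pelt_lshift(unsigned int mult)
  {
  	return mult >> 1;
  }

  static unsigned int pelt_halflife_ms(unsigned int mult)
  {
  	return 32 >> pelt_lshift(mult);
  }

  int main(void)
  {
  	unsigned int mults[] = { 1, 2, 4 };

  	for (int i = 0; i < 3; i++)
  		printf("mult=%u -> half-life %2ums, task clock delta <<= %u\n",
  		       mults[i], pelt_halflife_ms(mults[i]), pelt_lshift(mults[i]));
  	return 0;
  }

Since approximate_{util_avg, runtime}() also apply sched_pelt_lshift,
the defaults computed for response_time_ms shrink automatically when a
faster half-life is selected at boot.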