From patchwork Sun Aug 27 23:31:57 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 717686
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar, Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Lukasz Luba, Qais Yousef
Subject: [RFC PATCH 1/7] sched/pelt: Add a new function to approximate the future util_avg value
Date: Mon, 28 Aug 2023 00:31:57 +0100
Message-Id: <20230827233203.1315953-2-qyousef@layalina.io>
In-Reply-To: <20230827233203.1315953-1-qyousef@layalina.io>
References: <20230827233203.1315953-1-qyousef@layalina.io>
X-Mailing-List: linux-pm@vger.kernel.org

Given a util_avg value, the new function returns the projected util_avg value after a given runtime delta has elapsed.
This will be useful in later patches to help replace some of the magic margins with more deterministic behavior.

Signed-off-by: Qais Yousef (Google)
---
 kernel/sched/pelt.c  | 22 +++++++++++++++++++++-
 kernel/sched/sched.h |  3 +++
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 0f310768260c..50322005a0ae 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -466,4 +466,24 @@ int update_irq_load_avg(struct rq *rq, u64 running)
 
 	return ret;
 }
-#endif
+#endif /* CONFIG_HAVE_SCHED_AVG_IRQ */
+
+/*
+ * Approximate the new util_avg value assuming an entity has continued to run
+ * for @delta us.
+ */
+unsigned long approximate_util_avg(unsigned long util, u64 delta)
+{
+	struct sched_avg sa = {
+		.util_sum = util * PELT_MIN_DIVIDER,
+		.util_avg = util,
+	};
+
+	if (unlikely(!delta))
+		return util;
+
+	accumulate_sum(delta, &sa, 0, 0, 1);
+	___update_load_avg(&sa, 0);
+
+	return sa.util_avg;
+}

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 56eeb5b05b50..5f76b8a75a9f 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2997,6 +2997,9 @@ enum cpu_util_type {
 unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
 				 enum cpu_util_type type,
 				 struct task_struct *p);
+
+unsigned long approximate_util_avg(unsigned long util, u64 delta);
+
 /*
  * DVFS decision are made at discrete points.
  If CPU stays busy, the util will
 * continue to grow, which means it could need to run at a higher frequency

From patchwork Sun Aug 27 23:31:58 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 717685
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar, Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Lukasz Luba, Qais Yousef
Subject: [RFC PATCH 2/7] sched/pelt: Add a new function to approximate runtime to reach given util
Date: Mon, 28 Aug 2023 00:31:58 +0100
Message-Id: <20230827233203.1315953-3-qyousef@layalina.io>
In-Reply-To: <20230827233203.1315953-1-qyousef@layalina.io>
References: <20230827233203.1315953-1-qyousef@layalina.io>
X-Mailing-List: linux-pm@vger.kernel.org

This is basically the ramp-up time from 0 to a given value. It will be used later to implement a new tunable to control response time for schedutil.

Signed-off-by: Qais Yousef (Google)
---
 kernel/sched/pelt.c  | 21 +++++++++++++++++++++
 kernel/sched/sched.h |  1 +
 2 files changed, 22 insertions(+)

diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 50322005a0ae..f673b9ab92dc 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -487,3 +487,24 @@ unsigned long approximate_util_avg(unsigned long util, u64 delta)
 
 	return sa.util_avg;
 }
+
+/*
+ * Approximate the required amount of runtime in ms required to reach @util.
+ */
+u64 approximate_runtime(unsigned long util)
+{
+	struct sched_avg sa = {};
+	u64 delta = 1024; // period = 1024 = ~1ms
+	u64 runtime = 0;
+
+	if (unlikely(!util))
+		return runtime;
+
+	while (sa.util_avg < util) {
+		accumulate_sum(delta, &sa, 0, 0, 1);
+		___update_load_avg(&sa, 0);
+		runtime++;
+	}
+
+	return runtime;
+}

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 5f76b8a75a9f..2b889ad399de 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2999,6 +2999,7 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
 				 struct task_struct *p);
 
 unsigned long approximate_util_avg(unsigned long util, u64 delta);
+u64 approximate_runtime(unsigned long util);
 
 /*
  * DVFS decision are made at discrete points.
  If CPU stays busy, the util will

From patchwork Sun Aug 27 23:31:59 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 718112
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar, Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Lukasz Luba, Qais Yousef
Subject: [RFC PATCH 3/7] sched/fair: Remove magic margin in fits_capacity()
Date: Mon, 28 Aug 2023 00:31:59 +0100
Message-Id: <20230827233203.1315953-4-qyousef@layalina.io>
In-Reply-To: <20230827233203.1315953-1-qyousef@layalina.io>
References: <20230827233203.1315953-1-qyousef@layalina.io>
X-Mailing-List: linux-pm@vger.kernel.org

The 80% margin is a magic value that has served its purpose for now, but it no longer fits the variety of systems that exist today.
If a system is over-powered in particular, this 80% means we leave a lot of capacity unused before we decide to upmigrate on an HMP system. Upmigration behavior should rely on the fact that a bad decision will need load balancing to kick in and perform a misfit migration, and I think this is an adequate definition of how much headroom to require when deciding whether a util fits a capacity.

Use the new approximate_util_avg() function to predict the util if the task continues to run for TICK_USEC. If the resulting value is not strictly less than the capacity, the task must not be placed there, i.e. it is considered misfit.

Signed-off-by: Qais Yousef (Google)
---
 kernel/sched/fair.c | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0b7445cd5af9..facbf3eb7141 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -109,16 +109,31 @@ int __weak arch_asym_cpu_priority(int cpu)
 }
 
 /*
- * The margin used when comparing utilization with CPU capacity.
+ * The util will fit the capacity if it has enough headroom to grow within the
+ * next tick - which is when any load balancing activity happens to do the
+ * correction.
  *
- * (default: ~20%)
+ * If util stays within the capacity before tick has elapsed, then it should be
+ * fine. If not, then a correction action must happen shortly after it starts
+ * running, hence we treat it as !fit.
+ *
+ * TODO: TICK is not actually accurate enough. balance_interval is the correct
+ * one to use as the next load balance doesn't happen religiously at tick.
+ * Accessing balance_interval might be tricky and will require some refactoring
+ * first.
  */
-#define fits_capacity(cap, max)	((cap) * 1280 < (max) * 1024)
+static inline bool fits_capacity(unsigned long util, unsigned long capacity)
+{
+	return approximate_util_avg(util, TICK_USEC) < capacity;
+}
 
 /*
  * The margin used when comparing CPU capacities.
 * is 'cap1' noticeably greater than 'cap2'
 *
+ * TODO: use approximate_util_avg() to give something more quantifiable based
+ * on time? Like 1ms?
+ *
  * (default: ~5%)
  */
 #define capacity_greater(cap1, cap2) ((cap1) * 1024 > (cap2) * 1078)

From patchwork Sun Aug 27 23:32:00 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 717684
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar, Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Lukasz Luba, Qais Yousef
Subject: [RFC PATCH 4/7] sched: cpufreq: Remove magic 1.25 headroom from apply_dvfs_headroom()
Date: Mon, 28 Aug 2023 00:32:00 +0100
Message-Id: <20230827233203.1315953-5-qyousef@layalina.io>
In-Reply-To: <20230827233203.1315953-1-qyousef@layalina.io>
References: <20230827233203.1315953-1-qyousef@layalina.io>
X-Mailing-List: linux-pm@vger.kernel.org

Instead of the magical 1.25 headroom, use the new approximate_util_avg() to provide headroom based on the dvfs_update_delay, which is the period at which the cpufreq governor sends DVFS updates to the hardware.

Add a new percpu dvfs_update_delay that can be accessed cheaply whenever apply_dvfs_headroom() is called. We expect cpufreq governors that rely on the util signal to drive their DVFS logic/algorithm to populate these percpu variables. schedutil is the only such governor at the moment.

Signed-off-by: Qais Yousef (Google)
---
 kernel/sched/core.c              |  3 ++-
 kernel/sched/cpufreq_schedutil.c | 10 +++++++++-
 kernel/sched/sched.h             | 25 ++++++++++++++-----------
 3 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 602e369753a3..f56eb44745a8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -116,6 +116,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_util_est_se_tp);
 EXPORT_TRACEPOINT_SYMBOL_GPL(sched_update_nr_running_tp);
 
 DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
+DEFINE_PER_CPU_SHARED_ALIGNED(u64, dvfs_update_delay);
 
 #ifdef CONFIG_SCHED_DEBUG
 /*
@@ -7439,7 +7440,7 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
 	 * frequency will be gracefully reduced with the utilization decay.
	 */
	if (type == FREQUENCY_UTIL) {
-		util = apply_dvfs_headroom(util_cfs) + cpu_util_rt(rq);
+		util = apply_dvfs_headroom(util_cfs, cpu) + cpu_util_rt(rq);
 		util = uclamp_rq_util_with(rq, util, p);
 	} else {
 		util = util_cfs + cpu_util_rt(rq);

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 0c7565ac31fb..04aa06846f31 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -519,15 +519,21 @@ rate_limit_us_store(struct gov_attr_set *attr_set, const char *buf, size_t count
 	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);
 	struct sugov_policy *sg_policy;
 	unsigned int rate_limit_us;
+	int cpu;
 
 	if (kstrtouint(buf, 10, &rate_limit_us))
 		return -EINVAL;
 
 	tunables->rate_limit_us = rate_limit_us;
 
-	list_for_each_entry(sg_policy, &attr_set->policy_list, tunables_hook)
+	list_for_each_entry(sg_policy, &attr_set->policy_list, tunables_hook) {
 		sg_policy->freq_update_delay_ns = rate_limit_us * NSEC_PER_USEC;
+		for_each_cpu(cpu, sg_policy->policy->cpus)
+			per_cpu(dvfs_update_delay, cpu) = rate_limit_us;
+	}
+
 	return count;
 }
 
@@ -772,6 +778,8 @@ static int sugov_start(struct cpufreq_policy *policy)
 		memset(sg_cpu, 0, sizeof(*sg_cpu));
 		sg_cpu->cpu = cpu;
 		sg_cpu->sg_policy = sg_policy;
+
+		per_cpu(dvfs_update_delay, cpu) = sg_policy->tunables->rate_limit_us;
 	}
 
 	if (policy_is_shared(policy))

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 2b889ad399de..e06e512af192 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3001,6 +3001,15 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
 unsigned long approximate_util_avg(unsigned long util, u64 delta);
 u64 approximate_runtime(unsigned long util);
 
+/*
+ * Any governor that relies on util signal to drive DVFS, must populate these
+ * percpu dvfs_update_delay variables.
+ *
+ * It should describe the rate/delay at which the governor sends DVFS freq
+ * update to the hardware in us.
+ */
+DECLARE_PER_CPU_SHARED_ALIGNED(u64, dvfs_update_delay);
+
 /*
  * DVFS decision are made at discrete points. If CPU stays busy, the util will
  * continue to grow, which means it could need to run at a higher frequency
@@ -3010,20 +3019,14 @@ u64 approximate_runtime(unsigned long util);
  * to run at adequate performance point.
  *
  * This function provides enough headroom to provide adequate performance
- * assuming the CPU continues to be busy.
- *
- * At the moment it is a constant multiplication with 1.25.
+ * assuming the CPU continues to be busy. This headroom is based on the
+ * dvfs_update_delay of the cpufreq governor.
  *
- * TODO: The headroom should be a function of the delay. 25% is too high
- * especially on powerful systems. For example, if the delay is 500us, it makes
- * more sense to give a small headroom as the next decision point is not far
- * away and will follow the util if it continues to rise. On the other hand if
- * the delay is 10ms, then we need a bigger headroom so the CPU won't struggle
- * at a lower frequency if it never goes to idle until then.
+ * XXX: Should we provide headroom when the util is decaying?
  */
-static inline unsigned long apply_dvfs_headroom(unsigned long util)
+static inline unsigned long apply_dvfs_headroom(unsigned long util, int cpu)
 {
-	return util + (util >> 2);
+	return approximate_util_avg(util, per_cpu(dvfs_update_delay, cpu));
 }
 
 /*

From patchwork Sun Aug 27 23:32:01 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 718113
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar, Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Lukasz Luba, Qais Yousef
Subject: [RFC PATCH 5/7] sched/schedutil: Add a new tunable to dictate response time
Date: Mon, 28 Aug 2023 00:32:01 +0100
Message-Id: <20230827233203.1315953-6-qyousef@layalina.io>
In-Reply-To: <20230827233203.1315953-1-qyousef@layalina.io>
References: <20230827233203.1315953-1-qyousef@layalina.io>
X-Mailing-List: linux-pm@vger.kernel.org

The new tunable, response_time_ms, allows us to speed up or slow down the response time of the policy to meet the perf, power and thermal characteristics desired by the user/sysadmin. There is no single universal trade-off that we can apply to all systems, even if they use the same SoC. The form factor of the system, the dominant use case, and, for battery powered systems, the size of the battery and the presence or absence of active cooling all play a big role in deciding what is best to use.

The new tunable provides sensible defaults, yet gives the user/sysadmin the power to control the response time if they wish to. This tunable is applied when we map the util into a frequency.

TODO: to retain previous behavior, we must multiply the default time by 80%..

Signed-off-by: Qais Yousef (Google)
---
 Documentation/admin-guide/pm/cpufreq.rst | 19 ++++++-
 kernel/sched/cpufreq_schedutil.c         | 70 +++++++++++++++++++++++-
 2 files changed, 87 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/pm/cpufreq.rst b/Documentation/admin-guide/pm/cpufreq.rst
index 6adb7988e0eb..c43df0e716a7 100644
--- a/Documentation/admin-guide/pm/cpufreq.rst
+++ b/Documentation/admin-guide/pm/cpufreq.rst
@@ -417,7 +417,7 @@ is passed by the scheduler to the governor callback which causes the frequency
 to go up to the allowed maximum immediately and then draw back to the value
 returned by the above formula over time.
-This governor exposes only one tunable:
+This governor exposes two tunables:
 
 ``rate_limit_us``
 	Minimum time (in microseconds) that has to pass between two consecutive
@@ -427,6 +427,23 @@ This governor exposes only one tunable:
 	The purpose of this tunable is to reduce the scheduler context overhead
 	of the governor which might be excessive without it.
 
+``response_time_ms``
+	Amount of time (in milliseconds) required to ramp the policy from
+	lowest to highest frequency. Can be decreased to speed up the
+	responsiveness of the system, or increased to slow the system down in
+	the hope of saving power. The best perf/watt will depend on the system
+	characteristics and the dominant workload you expect to run. For
+	userspace that has smart context on the type of workload running (like
+	in Android), one can tune this to suit the demand of that workload.
+
+	Note that when slowing the response down, you can end up effectively
+	chopping off the top frequencies for that policy as the util is capped
+	to 1024. On HMP systems where some CPUs have a capacity less than 1024,
+	unless affinity is used, a task will probably have migrated to a bigger
+	core before reaching the max performance of the policy. If it is locked
+	to that policy, it should reach the max performance after the specified
+	time.
+
 This governor generally is regarded as a replacement for the older `ondemand`_
 and `conservative`_ governors (described below), as it is simpler and more
 tightly integrated with the CPU scheduler, its overhead in terms of CPU context

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 04aa06846f31..42f4c4100902 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -11,6 +11,7 @@
 struct sugov_tunables {
 	struct gov_attr_set	attr_set;
 	unsigned int		rate_limit_us;
+	unsigned int		response_time_ms;
 };
 
 struct sugov_policy {
@@ -22,6 +23,7 @@ struct sugov_policy {
 	raw_spinlock_t		update_lock;
 	u64			last_freq_update_time;
 	s64			freq_update_delay_ns;
+	unsigned int		freq_response_time_ms;
 	unsigned int		next_freq;
 	unsigned int		cached_raw_freq;
 
@@ -59,6 +61,45 @@ static DEFINE_PER_CPU(struct sugov_cpu, sugov_cpu);
 
 /************************ Governor internals ***********************/
 
+static inline u64 sugov_calc_freq_response_ms(struct sugov_policy *sg_policy)
+{
+	int cpu = cpumask_first(sg_policy->policy->cpus);
+	unsigned long cap = capacity_orig_of(cpu);
+
+	return approximate_runtime(cap);
+}
+
+/*
+ * Shrink or expand how long it takes to reach the maximum performance of the
+ * policy.
+ *
+ * sg_policy->freq_response_time_ms is a constant value defined by PELT
+ * HALFLIFE and the capacity of the policy (assuming HMP systems).
+ *
+ * sg_policy->tunables->response_time_ms is a user defined response time. By
+ * setting it lower than sg_policy->freq_response_time_ms, the system will
+ * respond faster to changes in util, which will result in reaching maximum
+ * performance point quicker. By setting it higher, it'll slow down the amount
+ * of time required to reach the maximum OPP.
+ *
+ * This should be applied when selecting the frequency. By default no
+ * conversion is done and we should return util as-is.
+ */
+static inline unsigned long
+sugov_apply_response_time(struct sugov_policy *sg_policy, unsigned long util)
+{
+	unsigned long mult;
+
+	if (sg_policy->freq_response_time_ms == sg_policy->tunables->response_time_ms)
+		return util;
+
+	mult = sg_policy->freq_response_time_ms * SCHED_CAPACITY_SCALE;
+	mult /= sg_policy->tunables->response_time_ms;
+	mult *= util;
+
+	return mult >> SCHED_CAPACITY_SHIFT;
+}
+
 static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
 {
 	s64 delta_ns;
@@ -143,6 +184,7 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 	unsigned int freq = arch_scale_freq_invariant() ?
 				policy->cpuinfo.max_freq : policy->cur;
 
+	util = sugov_apply_response_time(sg_policy, util);
 	freq = map_util_freq(util, freq, max);
 
 	if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
@@ -539,8 +581,32 @@ rate_limit_us_store(struct gov_attr_set *attr_set, const char *buf, size_t count)
 
 static struct governor_attr rate_limit_us = __ATTR_RW(rate_limit_us);
 
+static ssize_t response_time_ms_show(struct gov_attr_set *attr_set, char *buf)
+{
+	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);
+
+	return sprintf(buf, "%u\n", tunables->response_time_ms);
+}
+
+static ssize_t
+response_time_ms_store(struct gov_attr_set *attr_set, const char *buf, size_t count)
+{
+	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);
+	unsigned int response_time_ms;
+
+	if (kstrtouint(buf, 10, &response_time_ms))
+		return -EINVAL;
+
+	tunables->response_time_ms = response_time_ms;
+
+	return count;
+}
+
+static struct governor_attr response_time_ms = __ATTR_RW(response_time_ms);
+
 static struct attribute *sugov_attrs[] = {
 	&rate_limit_us.attr,
+	&response_time_ms.attr,
 	NULL
 };
 ATTRIBUTE_GROUPS(sugov);
@@ -704,6 +770,7 @@ static int sugov_init(struct cpufreq_policy *policy)
 	}
 
 	tunables->rate_limit_us = cpufreq_policy_transition_delay_us(policy);
+	tunables->response_time_ms = sugov_calc_freq_response_ms(sg_policy);
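The conversion in sugov_apply_response_time() is plain fixed-point arithmetic: the ratio of the PELT-derived default to the user's value, in 1024-based fixed point, scales util up or down. A quick editorial Python mirror of the integer math (the 200ms default is an illustrative assumption, not from the patch):

```python
SCHED_CAPACITY_SCALE = 1024
SCHED_CAPACITY_SHIFT = 10

def apply_response_time(util: int, freq_response_time_ms: int,
                        response_time_ms: int) -> int:
    """Integer-only sketch of sugov_apply_response_time()."""
    if freq_response_time_ms == response_time_ms:
        return util
    # mult is the ratio default/user in fixed point, scale 1024.
    mult = freq_response_time_ms * SCHED_CAPACITY_SCALE
    mult //= response_time_ms
    mult *= util
    return mult >> SCHED_CAPACITY_SHIFT

# With an assumed 200ms default: asking for 100ms doubles util (faster
# ramp), asking for 400ms halves it (slower ramp).
print(apply_response_time(512, 200, 100))  # 1024
print(apply_response_time(512, 200, 400))  # 256
```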
 
 	policy->governor_data = sg_policy;
 	sg_policy->tunables = tunables;
@@ -763,7 +830,8 @@ static int sugov_start(struct cpufreq_policy *policy)
 	void (*uu)(struct update_util_data *data, u64 time, unsigned int flags);
 	unsigned int cpu;
 
-	sg_policy->freq_update_delay_ns	= sg_policy->tunables->rate_limit_us * NSEC_PER_USEC;
+	sg_policy->freq_update_delay_ns	= sg_policy->tunables->rate_limit_us * NSEC_PER_USEC;
+	sg_policy->freq_response_time_ms = sugov_calc_freq_response_ms(sg_policy);
 	sg_policy->last_freq_update_time	= 0;
 	sg_policy->next_freq			= 0;
 	sg_policy->work_in_progress		= false;

From patchwork Sun Aug 27 23:32:02 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 717683
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar,
 Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Lukasz Luba,
 Qais Yousef
Subject: [RFC PATCH 6/7] sched/pelt: Introduce PELT multiplier
Date: Mon, 28 Aug 2023 00:32:02 +0100
Message-Id: <20230827233203.1315953-7-qyousef@layalina.io>
In-Reply-To: <20230827233203.1315953-1-qyousef@layalina.io>
References: <20230827233203.1315953-1-qyousef@layalina.io>

From: Vincent Donnefort

The new sched_pelt_multiplier boot param allows a user to set a clock
multiplier to x2 or x4 (x1 being the default). This clock multiplier
artificially speeds up PELT ramp up/down, similarly to using a faster
half-life than the default 32ms:

  - x1: 32ms half-life
  - x2: 16ms half-life
  - x4:  8ms half-life

Internally, a new clock is created: rq->clock_task_mult. It sits in the
clock hierarchy between rq->clock_task and rq->clock_pelt.

The param is set as read-only and can only be changed at boot time via:

  kernel.sched_pelt_multiplier=[1, 2, 4]

PELT has a big impact on the overall system response and reactiveness
to change. A smaller PELT half-life means less time is required to
reach the maximum performance point of the system when it becomes fully
busy, and equally a shorter time to go back to the lowest performance
point when the system goes back to idle. This faster reaction impacts
both DVFS response and migration time between clusters on HMP systems.

Smaller PELT half-lives are expected to give better performance at the
cost of more power. Underpowered systems can particularly benefit from
smaller values. Powerful systems can still benefit from smaller values
if they want to be tuned towards perf more and power is not their major
concern.

Combined with response_time_ms from schedutil, this should give users
and sysadmins a deterministic way to control the power, perf and
thermals triangle for their system.
The default response_time_ms will halve as the PELT half-life halves.

Update approximate_{util_avg, runtime}() to take into account the PELT
HALFLIFE multiplier.

Signed-off-by: Vincent Donnefort
Signed-off-by: Dietmar Eggemann
[Converted from sysctl to boot param and updated commit message]
Signed-off-by: Qais Yousef (Google)
---
 kernel/sched/core.c  |  2 +-
 kernel/sched/pelt.c  | 52 ++++++++++++++++++++++++++++++++++++++++++--
 kernel/sched/pelt.h  | 42 +++++++++++++++++++++++++++++++----
 kernel/sched/sched.h |  1 +
 4 files changed, 90 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f56eb44745a8..42ed86a6ad3c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -745,7 +745,7 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)
 	if ((irq_delta + steal) && sched_feat(NONTASK_CAPACITY))
 		update_irq_load_avg(rq, irq_delta + steal);
 #endif
-	update_rq_clock_pelt(rq, delta);
+	update_rq_clock_task_mult(rq, delta);
 }
 
 void update_rq_clock(struct rq *rq)
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index f673b9ab92dc..24886bab0f91 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -468,6 +468,54 @@ int update_irq_load_avg(struct rq *rq, u64 running)
 }
 #endif /* CONFIG_HAVE_SCHED_AVG_IRQ */
 
+__read_mostly unsigned int sched_pelt_lshift;
+static unsigned int sched_pelt_multiplier = 1;
+
+static int set_sched_pelt_multiplier(const char *val, const struct kernel_param *kp)
+{
+	int ret;
+
+	ret = param_set_int(val, kp);
+	if (ret)
+		goto error;
+
+	switch (sched_pelt_multiplier) {
+	case 1:
+		fallthrough;
+	case 2:
+		fallthrough;
+	case 4:
+		WRITE_ONCE(sched_pelt_lshift,
+			   sched_pelt_multiplier >> 1);
+		break;
+	default:
+		ret = -EINVAL;
+		goto error;
+	}
+
+	return 0;
+
+error:
+	sched_pelt_multiplier = 1;
+	return ret;
+}
+
+static const struct kernel_param_ops sched_pelt_multiplier_ops = {
+	.set = set_sched_pelt_multiplier,
+	.get = param_get_int,
+};
+
+#ifdef MODULE_PARAM_PREFIX
+#undef MODULE_PARAM_PREFIX
+#endif
+/* XXX: should we use sched as prefix? */
+#define MODULE_PARAM_PREFIX "kernel."
+module_param_cb(sched_pelt_multiplier, &sched_pelt_multiplier_ops, &sched_pelt_multiplier, 0444);
+MODULE_PARM_DESC(sched_pelt_multiplier, "PELT HALFLIFE helps control the responsiveness of the system.");
+MODULE_PARM_DESC(sched_pelt_multiplier, "Accepted value: 1 32ms PELT HALFLIFE - roughly 200ms to go from 0 to max performance point (default).");
+MODULE_PARM_DESC(sched_pelt_multiplier, "                2 16ms PELT HALFLIFE - roughly 100ms to go from 0 to max performance point.");
+MODULE_PARM_DESC(sched_pelt_multiplier, "                4  8ms PELT HALFLIFE - roughly  50ms to go from 0 to max performance point.");
+
 /*
  * Approximate the new util_avg value assuming an entity has continued to run
  * for @delta us.
@@ -482,7 +530,7 @@ unsigned long approximate_util_avg(unsigned long util, u64 delta)
 	if (unlikely(!delta))
 		return util;
 
-	accumulate_sum(delta, &sa, 0, 0, 1);
+	accumulate_sum(delta << sched_pelt_lshift, &sa, 0, 0, 1);
 	___update_load_avg(&sa, 0);
 
 	return sa.util_avg;
@@ -494,7 +542,7 @@ unsigned long approximate_util_avg(unsigned long util, u64 delta)
 u64 approximate_runtime(unsigned long util)
 {
 	struct sched_avg sa = {};
-	u64 delta = 1024; // period = 1024 = ~1ms
+	u64 delta = 1024 << sched_pelt_lshift; // period = 1024 = ~1ms
 	u64 runtime = 0;
 
 	if (unlikely(!util))
diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
index 3a0e0dc28721..9b35b5072bae 100644
--- a/kernel/sched/pelt.h
+++ b/kernel/sched/pelt.h
@@ -61,6 +61,14 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
 	WRITE_ONCE(avg->util_est.enqueued, enqueued);
 }
 
+static inline u64 rq_clock_task_mult(struct rq *rq)
+{
+	lockdep_assert_rq_held(rq);
+	assert_clock_updated(rq);
+
+	return rq->clock_task_mult;
+}
+
 static inline u64 rq_clock_pelt(struct rq *rq)
 {
 	lockdep_assert_rq_held(rq);
@@ -72,7 +80,7 @@ static inline u64 rq_clock_pelt(struct rq *rq)
 
 /* The rq is idle, we can sync to clock_task */
 static inline void
_update_idle_rq_clock_pelt(struct rq *rq)
 {
-	rq->clock_pelt  = rq_clock_task(rq);
+	rq->clock_pelt  = rq_clock_task_mult(rq);
 
 	u64_u32_store(rq->clock_idle, rq_clock(rq));
 	/* Paired with smp_rmb in migrate_se_pelt_lag() */
@@ -121,6 +129,27 @@ static inline void update_rq_clock_pelt(struct rq *rq, s64 delta)
 	rq->clock_pelt += delta;
 }
 
+extern unsigned int sched_pelt_lshift;
+
+/*
+ * absolute time   |1      |2      |3      |4      |5      |6      |
+ * @ mult = 1      --------****************--------****************-
+ * @ mult = 2      --------********----------------********---------
+ * @ mult = 4      --------****--------------------****-------------
+ * clock task mult
+ * @ mult = 2      |   |   |2  |3  |   |   |   |   |5  |6  |   |   |
+ * @ mult = 4      | | | | |2|3| | | | | | | | | | |5|6| | | | | | |
+ *
+ */
+static inline void update_rq_clock_task_mult(struct rq *rq, s64 delta)
+{
+	delta <<= READ_ONCE(sched_pelt_lshift);
+
+	rq->clock_task_mult += delta;
+
+	update_rq_clock_pelt(rq, delta);
+}
+
 /*
  * When rq becomes idle, we have to check if it has lost idle time
  * because it was fully busy. A rq is fully used when the /Sum util_sum
@@ -147,7 +176,7 @@ static inline void update_idle_rq_clock_pelt(struct rq *rq)
 	 * rq's clock_task.
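The multiplier-to-shift mapping and the clock scaling in update_rq_clock_task_mult() are compact enough to sketch outside the kernel. An editorial Python mirror (function names are mine) of `sched_pelt_multiplier >> 1` and `delta <<= lshift`:

```python
def pelt_lshift(multiplier: int) -> int:
    """Mirror of the patch's mapping: sched_pelt_multiplier >> 1.

    x1 -> shift 0, x2 -> shift 1, x4 -> shift 2; anything else is
    rejected, as in set_sched_pelt_multiplier().
    """
    if multiplier not in (1, 2, 4):
        raise ValueError("kernel.sched_pelt_multiplier accepts 1, 2 or 4")
    return multiplier >> 1

def clock_task_mult_delta(delta_ns: int, multiplier: int) -> int:
    """How update_rq_clock_task_mult() scales a task-clock delta."""
    return delta_ns << pelt_lshift(multiplier)

# 1ms of wall-clock task time is accounted as 1/2/4ms of PELT time,
# which is what makes the ramp appear 1x/2x/4x faster:
for mult in (1, 2, 4):
    print(mult, clock_task_mult_delta(1_000_000, mult))
```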
 	 */
 	if (util_sum >= divider)
-		rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
+		rq->lost_idle_time += rq_clock_task_mult(rq) - rq->clock_pelt;
 
 	_update_idle_rq_clock_pelt(rq);
 }
@@ -218,13 +247,18 @@ update_irq_load_avg(struct rq *rq, u64 running)
 	return 0;
 }
 
-static inline u64 rq_clock_pelt(struct rq *rq)
+static inline u64 rq_clock_task_mult(struct rq *rq)
 {
 	return rq_clock_task(rq);
 }
 
+static inline u64 rq_clock_pelt(struct rq *rq)
+{
+	return rq_clock_task_mult(rq);
+}
+
 static inline void
-update_rq_clock_pelt(struct rq *rq, s64 delta) { }
+update_rq_clock_task_mult(struct rq *rq, s64 delta) { }
 
 static inline void
 update_idle_rq_clock_pelt(struct rq *rq) { }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e06e512af192..896b6655397c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1023,6 +1023,7 @@ struct rq {
 	u64			clock;
 	/* Ensure that all clocks are in the same cache line */
 	u64			clock_task ____cacheline_aligned;
+	u64			clock_task_mult;
 	u64			clock_pelt;
 	unsigned long		lost_idle_time;
 	u64			clock_pelt_idle;

From patchwork Sun Aug 27 23:32:03 2023
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 718110
From: Qais Yousef
To: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar,
 Vincent Guittot, Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Lukasz Luba,
 Qais Yousef
Subject: [RFC PATCH 7/7] cpufreq: Change default transition delay to 2ms
Date: Mon, 28 Aug 2023 00:32:03 +0100
Message-Id: <20230827233203.1315953-8-qyousef@layalina.io>
In-Reply-To: <20230827233203.1315953-1-qyousef@layalina.io>
References: <20230827233203.1315953-1-qyousef@layalina.io>

10ms is too high for today's hardware, even low-end ones. This default
ends up being used a lot on Arm machines at least. Pine64, Mac mini and
Pixel 6 all end up with a 10ms rate_limit_us when using schedutil, and
it's too high for all of them.

Change the default to 2ms, which should be 'pessimistic' enough for the
worst case scenario, but not too high for platforms with fast DVFS
hardware.

Signed-off-by: Qais Yousef (Google)
---
 drivers/cpufreq/cpufreq.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 50bbc969ffe5..d8fc33b7f2d2 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -579,11 +579,11 @@ unsigned int cpufreq_policy_transition_delay_us(struct cpufreq_policy *policy)
 		 * for platforms where transition_latency is in milliseconds, it
 		 * ends up giving unrealistic values.
 		 *
-		 * Cap the default transition delay to 10 ms, which seems to be
+		 * Cap the default transition delay to 2 ms, which seems to be
 		 * a reasonable amount of time after which we should reevaluate
 		 * the frequency.
 		 */
-		return min(latency * LATENCY_MULTIPLIER, (unsigned int)10000);
+		return min(latency * LATENCY_MULTIPLIER, (unsigned int)(2*MSEC_PER_SEC));
 	}
 
 	return LATENCY_MULTIPLIER;
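The effect of the lower cap can be sketched outside the kernel. The following editorial Python sketch mirrors the capped branch of cpufreq_policy_transition_delay_us() after this patch; LATENCY_MULTIPLIER is 1000 (from include/linux/cpufreq.h), the result is in microseconds, and the function's transition_delay_us driver override is deliberately omitted:

```python
NSEC_PER_USEC = 1000
MSEC_PER_SEC = 1000
LATENCY_MULTIPLIER = 1000  # include/linux/cpufreq.h

def transition_delay_us(transition_latency_ns: int) -> int:
    """Sketch of the capped default transition delay (in us)."""
    latency = transition_latency_ns // NSEC_PER_USEC
    if latency:
        # Previously capped at 10000us (10ms), now at 2ms.
        return min(latency * LATENCY_MULTIPLIER, 2 * MSEC_PER_SEC)
    return LATENCY_MULTIPLIER

# Because of the x1000 multiplier, any latency of 2us or more already
# hits the 2ms cap; only sub-2us hardware gets a smaller default.
print(transition_delay_us(500_000))  # 2000
print(transition_delay_us(1_000))    # 1000
print(transition_delay_us(0))        # 1000
```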