From patchwork Wed Jul 26 09:22:32 2017
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 108739
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Rafael Wysocki, Viresh Kumar, Srinivas Pandruvada, Len Brown,
    Ingo Molnar, Peter Zijlstra
Cc: linux-pm@vger.kernel.org, Vincent Guittot, smuckle.linux@gmail.com,
    juri.lelli@arm.com, Morten.Rasmussen@arm.com, patrick.bellasi@arm.com,
    eas-dev@lists.linaro.org, linux-kernel@vger.kernel.org
Subject: [PATCH V4 1/3] sched: cpufreq: Allow remote cpufreq callbacks
Date: Wed, 26 Jul 2017 14:52:32 +0530
Message-Id: <8797d4993baa6580e3af741d081be492032ce9dd.1501060871.git.viresh.kumar@linaro.org>

We do not currently call cpufreq callbacks from the scheduler core for
remote (non-local) CPUs. But there are cases where such remote callbacks
are useful, especially in the case of shared cpufreq policies. This patch
updates the scheduler core to call the cpufreq callbacks for remote CPUs
as well.
For now, all the registered utilization update callbacks are updated to
return early if a remote callback is detected. That is, this patch just
moves the decision making down the hierarchy. Later patches will enable
remote callbacks for shared policies.

Based on initial work from Steve Muckle.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq_governor.c |  4 ++++
 drivers/cpufreq/intel_pstate.c     |  8 ++++++++
 include/linux/sched/cpufreq.h      |  1 +
 kernel/sched/cpufreq.c             |  1 +
 kernel/sched/cpufreq_schedutil.c   | 11 ++++++++---
 kernel/sched/deadline.c            |  2 +-
 kernel/sched/fair.c                |  8 +++++---
 kernel/sched/rt.c                  |  2 +-
 kernel/sched/sched.h               | 10 ++--------
 9 files changed, 31 insertions(+), 16 deletions(-)

-- 
2.13.0.71.gd7076ec9c9cb

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index eed069ecfd5e..5499796cf9a8 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -272,6 +272,10 @@ static void dbs_update_util_handler(struct update_util_data *data, u64 time,
 	struct policy_dbs_info *policy_dbs = cdbs->policy_dbs;
 	u64 delta_ns, lst;
 
+	/* Don't allow remote callbacks */
+	if (smp_processor_id() != data->cpu)
+		return;
+
 	/*
 	 * The work may not be allowed to be queued up right now.
 	 * Possible reasons:
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 89bbc0c11b22..0dd14c8edd2d 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -1747,6 +1747,10 @@ static void intel_pstate_update_util_pid(struct update_util_data *data,
 	struct cpudata *cpu = container_of(data, struct cpudata, update_util);
 	u64 delta_ns = time - cpu->sample.time;
 
+	/* Don't allow remote callbacks */
+	if (smp_processor_id() != data->cpu)
+		return;
+
 	if ((s64)delta_ns < pid_params.sample_rate_ns)
 		return;
 
@@ -1764,6 +1768,10 @@ static void intel_pstate_update_util(struct update_util_data *data, u64 time,
 	struct cpudata *cpu = container_of(data, struct cpudata, update_util);
 	u64 delta_ns;
 
+	/* Don't allow remote callbacks */
+	if (smp_processor_id() != data->cpu)
+		return;
+
 	if (flags & SCHED_CPUFREQ_IOWAIT) {
 		cpu->iowait_boost = int_tofp(1);
 	} else if (cpu->iowait_boost) {
diff --git a/include/linux/sched/cpufreq.h b/include/linux/sched/cpufreq.h
index d2be2ccbb372..8256a8f35f22 100644
--- a/include/linux/sched/cpufreq.h
+++ b/include/linux/sched/cpufreq.h
@@ -16,6 +16,7 @@
 #ifdef CONFIG_CPU_FREQ
 struct update_util_data {
 	void (*func)(struct update_util_data *data, u64 time, unsigned int flags);
+	unsigned int cpu;
 };
 
 void cpufreq_add_update_util_hook(int cpu, struct update_util_data *data,
diff --git a/kernel/sched/cpufreq.c b/kernel/sched/cpufreq.c
index dbc51442ecbc..ee4c596b71b4 100644
--- a/kernel/sched/cpufreq.c
+++ b/kernel/sched/cpufreq.c
@@ -42,6 +42,7 @@ void cpufreq_add_update_util_hook(int cpu, struct update_util_data *data,
 		return;
 
 	data->func = func;
+	data->cpu = cpu;
 	rcu_assign_pointer(per_cpu(cpufreq_update_util_data, cpu), data);
 }
 EXPORT_SYMBOL_GPL(cpufreq_add_update_util_hook);
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 45fcf21ad685..bb834747e49b 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -72,10 +72,15 @@ static DEFINE_PER_CPU(struct sugov_cpu, sugov_cpu);
 
 /************************ Governor internals ***********************/
 
-static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
+static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time,
+				     int target_cpu)
 {
 	s64 delta_ns;
 
+	/* Don't allow remote callbacks */
+	if (smp_processor_id() != target_cpu)
+		return false;
+
 	if (sg_policy->work_in_progress)
 		return false;
 
@@ -221,7 +226,7 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 	sugov_set_iowait_boost(sg_cpu, time, flags);
 	sg_cpu->last_update = time;
 
-	if (!sugov_should_update_freq(sg_policy, time))
+	if (!sugov_should_update_freq(sg_policy, time, hook->cpu))
 		return;
 
 	busy = sugov_cpu_is_busy(sg_cpu);
@@ -301,7 +306,7 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 	sugov_set_iowait_boost(sg_cpu, time, flags);
 	sg_cpu->last_update = time;
 
-	if (sugov_should_update_freq(sg_policy, time)) {
+	if (sugov_should_update_freq(sg_policy, time, hook->cpu)) {
 		if (flags & SCHED_CPUFREQ_RT_DL)
 			next_f = sg_policy->policy->cpuinfo.max_freq;
 		else
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 755bd3f1a1a9..5c3bf4bd0327 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1136,7 +1136,7 @@ static void update_curr_dl(struct rq *rq)
 	}
 
 	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
-	cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_DL);
+	cpufreq_update_util(rq, SCHED_CPUFREQ_DL);
 
 	schedstat_set(curr->se.statistics.exec_max,
 		      max(curr->se.statistics.exec_max, delta_exec));
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c95880e216f6..d378d02fdfcb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3278,7 +3278,9 @@ static inline void set_tg_cfs_propagate(struct cfs_rq *cfs_rq) {}
 
 static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq)
 {
-	if (&this_rq()->cfs == cfs_rq) {
+	struct rq *rq = rq_of(cfs_rq);
+
+	if (&rq->cfs == cfs_rq) {
 		/*
 		 * There are a few boundary cases this might miss but it should
 		 * get called often enough that that should (hopefully) not be
@@ -3295,7 +3297,7 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq)
 		 *
 		 * See cpu_util().
 		 */
-		cpufreq_update_util(rq_of(cfs_rq), 0);
+		cpufreq_update_util(rq, 0);
 	}
 }
 
@@ -4875,7 +4877,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	 * passed.
 	 */
 	if (p->in_iowait)
-		cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_IOWAIT);
+		cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT);
 
 	for_each_sched_entity(se) {
 		if (se->on_rq)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 45caf937ef90..0af5ca9e3e3f 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -970,7 +970,7 @@ static void update_curr_rt(struct rq *rq)
 		return;
 
 	/* Kick cpufreq (see the comment in kernel/sched/sched.h). */
-	cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_RT);
+	cpufreq_update_util(rq, SCHED_CPUFREQ_RT);
 
 	schedstat_set(curr->se.statistics.exec_max,
 		      max(curr->se.statistics.exec_max, delta_exec));
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index eeef1a3086d1..aa9d5b87b4f8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2070,19 +2070,13 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
 {
 	struct update_util_data *data;
 
-	data = rcu_dereference_sched(*this_cpu_ptr(&cpufreq_update_util_data));
+	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data,
+						  cpu_of(rq)));
 	if (data)
 		data->func(data, rq_clock(rq), flags);
 }
-
-static inline void cpufreq_update_this_cpu(struct rq *rq, unsigned int flags)
-{
-	if (cpu_of(rq) == smp_processor_id())
-		cpufreq_update_util(rq, flags);
-}
 #else
 static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
-static inline void cpufreq_update_this_cpu(struct rq *rq, unsigned int flags) {}
 #endif /* CONFIG_CPU_FREQ */
 
 #ifdef arch_scale_freq_capacity
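For readers following the series, the driver-side pattern this patch creates can
be sketched roughly as follows; my_gov_data, my_gov_update_util() and
my_gov_start_cpu() are made-up names used only for illustration and are not part
of this patch. The sketch assumes the update_util_data.cpu field introduced
above, and mirrors the checks added to cpufreq_governor.c, intel_pstate.c and
schedutil: the governor registers a per-CPU hook, and the callback bails out
when invoked from a different CPU.

#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/sched/cpufreq.h>
#include <linux/smp.h>

/* Hypothetical per-CPU governor state embedding the scheduler hook. */
struct my_gov_data {
	struct update_util_data update_util;	/* ->cpu set by cpufreq_add_update_util_hook() */
	u64 last_update;
};

static DEFINE_PER_CPU(struct my_gov_data, my_gov_data);

static void my_gov_update_util(struct update_util_data *hook, u64 time,
			       unsigned int flags)
{
	struct my_gov_data *gd = container_of(hook, struct my_gov_data,
					      update_util);

	/*
	 * The scheduler may now invoke this hook for a remote CPU.  A
	 * governor that cannot handle that yet filters the call, exactly
	 * as the drivers updated in this patch do.
	 */
	if (smp_processor_id() != hook->cpu)
		return;

	gd->last_update = time;
	/* ... evaluate utilization and request a frequency change ... */
}

static void my_gov_start_cpu(int cpu)
{
	struct my_gov_data *gd = per_cpu_ptr(&my_gov_data, cpu);

	/* The hook remembers which CPU it was registered for (data->cpu). */
	cpufreq_add_update_util_hook(cpu, &gd->update_util, my_gov_update_util);
}

Once a governor can safely evaluate other CPUs of its policy (as later patches
in this series allow for shared-policy schedutil), it can drop the
smp_processor_id() check instead of returning early.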