From patchwork Wed Apr 20 02:39:27 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Steve Muckle
X-Patchwork-Id: 66165
From: Steve Muckle
To: "Rafael J. Wysocki" , Viresh Kumar
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
 Peter Zijlstra , Ingo Molnar , Vincent Guittot , Morten Rasmussen ,
 Dietmar Eggemann , Juri Lelli , Patrick Bellasi , Michael Turquette
Subject: [RFC PATCH 2/4] cpufreq: schedutil: support scheduler cpufreq
 callbacks on remote CPUs
Date: Tue, 19 Apr 2016 19:39:27 -0700
Message-Id: <1461119969-10371-2-git-send-email-smuckle@linaro.org>
X-Mailer: git-send-email 2.4.10
In-Reply-To: <1461119969-10371-1-git-send-email-smuckle@linaro.org>
References: <1461119969-10371-1-git-send-email-smuckle@linaro.org>
X-Mailing-List: linux-pm@vger.kernel.org

In preparation for the scheduler cpufreq callback happening on remote
CPUs, add support for this in schedutil. Schedutil requires that the
callback occur on the CPU being updated in order to support fast
frequency switches.
Signed-off-by: Steve Muckle
---
 kernel/sched/cpufreq_schedutil.c | 90 ++++++++++++++++++++++++++++++----------
 1 file changed, 68 insertions(+), 22 deletions(-)

-- 
2.4.10

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 154ae3a51e86..6e7cf90d4ea7 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -49,6 +49,8 @@ struct sugov_cpu {
 	unsigned long util;
 	unsigned long max;
 	u64 last_update;
+
+	int cpu;
 };
 
 static DEFINE_PER_CPU(struct sugov_cpu, sugov_cpu);
@@ -76,27 +78,59 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
 	return delta_ns >= sg_policy->freq_update_delay_ns;
 }
 
-static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
+static void sugov_fast_switch(struct sugov_policy *sg_policy,
+			      unsigned int next_freq)
+{
+	struct cpufreq_policy *policy = sg_policy->policy;
+
+	next_freq = cpufreq_driver_fast_switch(policy, next_freq);
+	if (next_freq == CPUFREQ_ENTRY_INVALID)
+		return;
+
+	policy->cur = next_freq;
+	trace_cpu_frequency(next_freq, smp_processor_id());
+}
+
+#ifdef CONFIG_SMP
+static inline bool sugov_queue_remote_callback(struct sugov_policy *sg_policy,
+					       int cpu)
+{
+	if (cpu != smp_processor_id()) {
+		sg_policy->work_in_progress = true;
+		irq_work_queue_on(&sg_policy->irq_work, cpu);
+		return true;
+	}
+
+	return false;
+}
+#else
+static inline bool sugov_queue_remote_callback(struct sugov_policy *sg_policy,
+					       int cpu)
+{
+	return false;
+}
+#endif
+
+static void sugov_update_commit(struct sugov_cpu *sg_cpu, u64 time,
 				unsigned int next_freq)
 {
+	struct sugov_policy *sg_policy = sg_cpu->sg_policy;
 	struct cpufreq_policy *policy = sg_policy->policy;
 
 	sg_policy->last_freq_update_time = time;
 
+	if (sg_policy->next_freq == next_freq) {
+		trace_cpu_frequency(policy->cur, sg_cpu->cpu);
+		return;
+	}
+	sg_policy->next_freq = next_freq;
+
+	if (sugov_queue_remote_callback(sg_policy, sg_cpu->cpu))
+		return;
+
 	if (policy->fast_switch_enabled) {
-		if (sg_policy->next_freq == next_freq) {
-			trace_cpu_frequency(policy->cur, smp_processor_id());
-			return;
-		}
-		sg_policy->next_freq = next_freq;
-		next_freq = cpufreq_driver_fast_switch(policy, next_freq);
-		if (next_freq == CPUFREQ_ENTRY_INVALID)
-			return;
-
-		policy->cur = next_freq;
-		trace_cpu_frequency(next_freq, smp_processor_id());
-	} else if (sg_policy->next_freq != next_freq) {
-		sg_policy->next_freq = next_freq;
+		sugov_fast_switch(sg_policy, next_freq);
+	} else {
 		sg_policy->work_in_progress = true;
 		irq_work_queue(&sg_policy->irq_work);
 	}
@@ -142,12 +176,13 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 	next_f = util == ULONG_MAX ? policy->cpuinfo.max_freq :
 			get_next_freq(policy, util, max);
-	sugov_update_commit(sg_policy, time, next_f);
+	sugov_update_commit(sg_cpu, time, next_f);
 }
 
-static unsigned int sugov_next_freq_shared(struct sugov_policy *sg_policy,
+static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu,
 					   unsigned long util, unsigned long max)
 {
+	struct sugov_policy *sg_policy = sg_cpu->sg_policy;
 	struct cpufreq_policy *policy = sg_policy->policy;
 	unsigned int max_f = policy->cpuinfo.max_freq;
 	u64 last_freq_update_time = sg_policy->last_freq_update_time;
@@ -161,10 +196,10 @@ static unsigned int sugov_next_freq_shared(struct sugov_policy *sg_policy,
 		unsigned long j_util, j_max;
 		s64 delta_ns;
 
-		if (j == smp_processor_id())
+		j_sg_cpu = &per_cpu(sugov_cpu, j);
+		if (j_sg_cpu == sg_cpu)
 			continue;
 
-		j_sg_cpu = &per_cpu(sugov_cpu, j);
 		/*
 		 * If the CPU utilization was last updated before the previous
 		 * frequency update and the time elapsed between the last update
@@ -204,8 +239,8 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 	sg_cpu->last_update = time;
 
 	if (sugov_should_update_freq(sg_policy, time)) {
-		next_f = sugov_next_freq_shared(sg_policy, util, max);
-		sugov_update_commit(sg_policy, time, next_f);
+		next_f = sugov_next_freq_shared(sg_cpu, util, max);
+		sugov_update_commit(sg_cpu, time, next_f);
 	}
 
 	raw_spin_unlock(&sg_policy->update_lock);
@@ -226,9 +261,17 @@ static void sugov_work(struct work_struct *work)
 static void sugov_irq_work(struct irq_work *irq_work)
 {
 	struct sugov_policy *sg_policy;
+	struct cpufreq_policy *policy;
 
 	sg_policy = container_of(irq_work, struct sugov_policy, irq_work);
-	schedule_work_on(smp_processor_id(), &sg_policy->work);
+	policy = sg_policy->policy;
+
+	if (policy->fast_switch_enabled) {
+		sugov_fast_switch(sg_policy, sg_policy->next_freq);
+		sg_policy->work_in_progress = false;
+	} else {
+		schedule_work_on(smp_processor_id(), &sg_policy->work);
+	}
 }
 
 /************************** sysfs interface ************************/
@@ -330,7 +373,7 @@ static int sugov_init(struct cpufreq_policy *policy)
 	struct sugov_policy *sg_policy;
 	struct sugov_tunables *tunables;
 	unsigned int lat;
-	int ret = 0;
+	int cpu, ret = 0;
 
 	/* State should be equivalent to EXIT */
 	if (policy->governor_data)
@@ -340,6 +383,9 @@ static int sugov_init(struct cpufreq_policy *policy)
 	if (!sg_policy)
 		return -ENOMEM;
 
+	for_each_cpu(cpu, policy->cpus)
+		per_cpu(sugov_cpu, cpu).cpu = cpu;
+
 	mutex_lock(&global_tunables_lock);
 
 	if (global_tunables) {