From patchwork Thu Oct 27 17:41:05 2016
X-Patchwork-Submitter: Patrick Bellasi
X-Patchwork-Id: 79770
From: Patrick Bellasi
To: linux-kernel@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Steve Muckle, Leo Yan,
 Viresh Kumar, "Rafael J. Wysocki", Todd Kjos, Srinath Sridharan,
 Andres Oportus, Juri Lelli, Morten Rasmussen, Dietmar Eggemann,
 Chris Redpath, Robin Randhawa, Patrick Bellasi, Tejun Heo, Li Zefan,
 Johannes Weiner, Ingo Molnar
Subject: [RFC v2 5/8] sched/tune: add initial support for CGroups based boosting
Date: Thu, 27 Oct 2016 18:41:05 +0100
Message-Id: <20161027174108.31139-6-patrick.bellasi@arm.com>
X-Mailer: git-send-email 2.10.1
In-Reply-To: <20161027174108.31139-1-patrick.bellasi@arm.com>
References: <20161027174108.31139-1-patrick.bellasi@arm.com>

To support task performance boosting, the use of a single knob has the
advantage of being a simple solution, both from the implementation and
the usability standpoint. However, on a real system it can be difficult
to identify a single value for the knob which fits the needs of many
different tasks. For example, some kernel threads and/or user-space
background services are better managed the "standard" way, while we
still want to be able to boost the performance of specific workloads.

To improve the flexibility of the task boosting mechanism, this patch is
the first of a small series which extends the previous implementation to
introduce "per task group" support.

This first patch introduces just the basic CGroups support: a new
"schedtune" CGroups controller is added which allows configuring
different boost values for different groups of tasks. To keep the
implementation simple while still supporting an effective boosting
strategy, the new controller:

1. allows only a two-layer hierarchy
2. supports only a limited number of boost groups

A two-layer hierarchy allows each task to be placed either:
 a) in the root control group, thus being subject to a system-wide
    boosting value
 b) in a child of the root group, thus being subject to the specific
    boost value defined by that "boost group"

The limited number of "boost groups" supported is mainly motivated by
the observation that in a real system it is useful to have only a few
classes of tasks which deserve different treatment, for example
background vs foreground or interactive vs low-priority tasks. As an
additional benefit, a limited number of boost groups also allows for a
simpler implementation, especially for the code required to compute the
boost value for CPUs which have RUNNABLE tasks belonging to different
boost groups.
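For illustration, once the controller is mounted (e.g. with
"mount -t cgroup -o schedtune none <mountpoint>"), boost groups are
managed through the usual CGroups filesystem interface. The following
minimal user-space sketch assumes a hypothetical mount point
/sys/fs/cgroup/stune and a hypothetical group name "boosted"; only the
schedtune.boost attribute is actually defined by this patch:

	#include <stdio.h>
	#include <sys/stat.h>
	#include <sys/types.h>
	#include <unistd.h>

	int main(void)
	{
		FILE *f;

		/* Create a child "boost group" under the schedtune root group */
		mkdir("/sys/fs/cgroup/stune/boosted", 0755);

		/* Boost tasks in this group by 10% */
		f = fopen("/sys/fs/cgroup/stune/boosted/schedtune.boost", "w");
		if (!f)
			return 1;
		fprintf(f, "10");
		fclose(f);

		/* Move the calling task into the boost group */
		f = fopen("/sys/fs/cgroup/stune/boosted/tasks", "w");
		if (!f)
			return 1;
		fprintf(f, "%d", getpid());
		fclose(f);

		return 0;
	}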
Cc: Tejun Heo
Cc: Li Zefan
Cc: Johannes Weiner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Signed-off-by: Patrick Bellasi
---
 include/linux/cgroup_subsys.h |   4 +
 init/Kconfig                  |  42 ++++++++
 kernel/sched/tune.c           | 233 ++++++++++++++++++++++++++++++++++++++++++
 kernel/sysctl.c               |   4 +
 4 files changed, 283 insertions(+)

-- 
2.10.1

diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 0df0336a..4fd0f82 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -20,6 +20,10 @@ SUBSYS(cpu)
 SUBSYS(cpuacct)
 #endif
 
+#if IS_ENABLED(CONFIG_CGROUP_SCHED_TUNE)
+SUBSYS(schedtune)
+#endif
+
 #if IS_ENABLED(CONFIG_BLK_CGROUP)
 SUBSYS(io)
 #endif

diff --git a/init/Kconfig b/init/Kconfig
index 461e052..5bce1ef 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1074,6 +1074,48 @@ config RT_GROUP_SCHED
 
 endif #CGROUP_SCHED
 
+config CGROUP_SCHED_TUNE
+	bool "Tasks boosting controller"
+	depends on SCHED_TUNE
+	help
+	  This option allows users to define boost values for groups of
+	  SCHED_OTHER tasks. Once enabled, the utilization of a CPU is
+	  boosted by a factor proportional to the maximum boost value of
+	  all the tasks RUNNABLE on that CPU.
+
+	  This new controller:
+	  1. allows only a two-layer hierarchy, where the root defines the
+	     system-wide boost value and its children define a "boost group"
+	     whose tasks will be boosted with the configured value
+	  2. supports only a limited number of different boost groups, each
+	     of which can be configured with a different boost value
+
+	  Say N if unsure.
+
+config SCHED_TUNE_BOOSTGROUPS
+	int "Maximum number of SchedTune's boost groups"
+	range 2 16
+	default 5
+	depends on CGROUP_SCHED_TUNE
+	help
+	  When per-task boosting is used we still allow only a limited
+	  number of boost groups, for two main reasons:
+	  1. on a real system we usually have only a few classes of
+	     workloads which make sense to boost with different values,
+	     e.g. background vs foreground tasks, interactive vs
+	     low-priority tasks
+	  2. a limited number allows for a simpler and more memory/time
+	     efficient implementation, especially for the computation of
+	     the per-CPU boost value
+
+	  NOTE: The first boost group is reserved to define the global
+	  boost value applied to all tasks, thus the minimum number of
+	  boost groups is 2. Indeed, if only global boosting is required,
+	  then per-task boosting is not needed and this support can be
+	  disabled.
+
+	  Use the default value (5) if unsure.
+
 config CGROUP_PIDS
 	bool "PIDs controller"
 	help

diff --git a/kernel/sched/tune.c b/kernel/sched/tune.c
index c28a06f..4eaea1d 100644
--- a/kernel/sched/tune.c
+++ b/kernel/sched/tune.c
@@ -4,11 +4,239 @@
  * Copyright (C) 2016 ARM Ltd, Patrick Bellasi
  */
 
+#include <linux/cgroup.h>
+#include <linux/err.h>
+#include <linux/percpu.h>
+#include <linux/slab.h>
+
 #include "sched.h"
 #include "tune.h"
 
 unsigned int sysctl_sched_cfs_boost __read_mostly;
 
+#ifdef CONFIG_CGROUP_SCHED_TUNE
+
+/*
+ * CFS Scheduler Tunables for Task Groups.
+ */
+
+/* SchedTune tunables for a group of tasks */
+struct schedtune {
+	/* SchedTune CGroup subsystem */
+	struct cgroup_subsys_state css;
+
+	/* Boost group allocated ID */
+	int idx;
+
+	/* Boost value for tasks on that SchedTune CGroup */
+	unsigned int boost;
+};
+
+static inline struct schedtune *css_st(struct cgroup_subsys_state *css)
+{
+	return css ? container_of(css, struct schedtune, css) : NULL;
+}
+
+static inline struct schedtune *task_schedtune(struct task_struct *tsk)
+{
+	return css_st(task_css(tsk, schedtune_cgrp_id));
+}
+
+static inline struct schedtune *parent_st(struct schedtune *st)
+{
+	return css_st(st->css.parent);
+}
+
+/*
+ * SchedTune root control group
+ * The root control group is used to define a system-wide boosting tuning,
+ * which is applied to all tasks in the system.
+ * Task specific boost tuning could be specified by creating and
+ * configuring a child control group under the root one.
+ * By default, system-wide boosting is disabled, i.e. no boosting is
+ * applied to tasks which are not in a child control group.
+ */
+static struct schedtune
+root_schedtune = {
+	.boost	= 0,
+};
+
+/*
+ * Maximum number of boost groups to support
+ * When per-task boosting is used we still allow only a limited number of
+ * boost groups for two main reasons:
+ * 1. on a real system we usually have only a few classes of workloads
+ *    which make sense to boost with different values (e.g. background vs
+ *    foreground tasks, interactive vs low-priority tasks)
+ * 2. a limited number allows for a simpler and more memory/time efficient
+ *    implementation especially for the computation of the per-CPU boost
+ *    value
+ */
+#define boostgroups_max CONFIG_SCHED_TUNE_BOOSTGROUPS
+
+/* Array of configured boostgroups */
+static struct schedtune *allocated_group[boostgroups_max] = {
+	&root_schedtune,
+	NULL,
+};
+
+/* SchedTune boost groups
+ * Keep track of all the boost groups which impact on a CPU, for example
+ * when a CPU has two RUNNABLE tasks belonging to two different boost
+ * groups and thus likely with different boost values. Since the maximum
+ * number of boost groups is limited by CONFIG_SCHED_TUNE_BOOSTGROUPS,
+ * which is limited to 16, we use a simple array to keep track of the
+ * metrics required to compute the maximum per-CPU boosting value.
+ */
+struct boost_groups {
+	/* Maximum boost value for all RUNNABLE tasks on a CPU */
+	unsigned int boost_max;
+	struct {
+		/* The boost for tasks on that boost group */
+		unsigned int boost;
+		/* Count of RUNNABLE tasks on that boost group */
+		unsigned int tasks;
+	} group[boostgroups_max];
+};
+
+/* Boost groups affecting each CPU in the system */
+DEFINE_PER_CPU(struct boost_groups, cpu_boost_groups);
+
+static u64
+boost_read(struct cgroup_subsys_state *css, struct cftype *cft)
+{
+	struct schedtune *st = css_st(css);
+
+	return st->boost;
+}
+
+static int
+boost_write(struct cgroup_subsys_state *css, struct cftype *cft,
+	    u64 boost)
+{
+	struct schedtune *st = css_st(css);
+
+	if (boost > 100)
+		return -EINVAL;
+	st->boost = boost;
+	if (css == &root_schedtune.css)
+		sysctl_sched_cfs_boost = boost;
+	return 0;
+}
+
+static struct cftype files[] = {
+	{
+		.name = "boost",
+		.read_u64 = boost_read,
+		.write_u64 = boost_write,
+	},
+	{ }	/* terminate */
+};
+
+static int
+schedtune_boostgroup_init(struct schedtune *st)
+{
+	struct boost_groups *bg;
+	int cpu;
+
+	/* Keep track of allocated boost groups */
+	allocated_group[st->idx] = st;
+
+	/* Initialize the per CPU boost groups */
+	for_each_possible_cpu(cpu) {
+		bg = &per_cpu(cpu_boost_groups, cpu);
+		bg->group[st->idx].boost = 0;
+		bg->group[st->idx].tasks = 0;
+	}
+
+	return 0;
+}
+
+static struct cgroup_subsys_state *
+schedtune_css_alloc(struct cgroup_subsys_state *parent_css)
+{
+	struct schedtune *st;
+	int idx;
+
+	if (!parent_css)
+		return &root_schedtune.css;
+
+	/* Allow only single-level hierarchies */
+	if (parent_css != &root_schedtune.css) {
+		pr_err("Nested SchedTune boosting groups not allowed\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	/* Allow only a limited number of boosting groups */
+	for (idx = 1; idx < boostgroups_max; ++idx)
+		if (!allocated_group[idx])
+			break;
+	if (idx == boostgroups_max) {
+		pr_err("Trying to create more than %d SchedTune boosting groups\n",
+		       boostgroups_max);
+		return ERR_PTR(-ENOSPC);
+	}
+
+	st = kzalloc(sizeof(*st), GFP_KERNEL);
+	if (!st)
+		goto out;
+
+	/* Initialize per CPUs boost group support */
+	st->idx = idx;
+	if (schedtune_boostgroup_init(st))
+		goto release;
+
+	return &st->css;
+
+release:
+	kfree(st);
+out:
+	return ERR_PTR(-ENOMEM);
+}
+
+static void
+schedtune_boostgroup_release(struct schedtune *st)
+{
+	/* Keep track of allocated boost groups */
+	allocated_group[st->idx] = NULL;
+}
+
+static void
+schedtune_css_free(struct cgroup_subsys_state *css)
+{
+	struct schedtune *st = css_st(css);
+
+	schedtune_boostgroup_release(st);
+	kfree(st);
+}
+
+struct cgroup_subsys schedtune_cgrp_subsys = {
+	.css_alloc	= schedtune_css_alloc,
+	.css_free	= schedtune_css_free,
+	.legacy_cftypes	= files,
+	.early_init	= 1,
+};
+
+static inline void
+schedtune_init_cgroups(void)
+{
+	struct boost_groups *bg;
+	int cpu;
+
+	/* Initialize the per CPU boost groups */
+	for_each_possible_cpu(cpu) {
+		bg = &per_cpu(cpu_boost_groups, cpu);
+		memset(bg, 0, sizeof(struct boost_groups));
+	}
+
+	pr_info("schedtune: configured to support %d boost groups\n",
+		boostgroups_max);
+}
+
+#endif /* CONFIG_CGROUP_SCHED_TUNE */
+
 int
 sysctl_sched_cfs_boost_handler(struct ctl_table *table, int write,
 			       void __user *buffer, size_t *lenp,
@@ -26,6 +254,11 @@ static int
 schedtune_init(void)
 {
 	schedtune_spc_rdiv = reciprocal_value(100);
+#ifdef CONFIG_CGROUP_SCHED_TUNE
+	schedtune_init_cgroups();
+#else
+	pr_info("schedtune: configured to support global boosting only\n");
+#endif
 	return 0;
 }
 
 late_initcall(schedtune_init);

diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 43b6d14..12c3432 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -447,7 +447,11 @@ static struct ctl_table kern_table[] = {
 		.procname	= "sched_cfs_boost",
 		.data		= &sysctl_sched_cfs_boost,
 		.maxlen		= sizeof(sysctl_sched_cfs_boost),
+#ifdef CONFIG_CGROUP_SCHED_TUNE
+		.mode		= 0444,
+#else
 		.mode		= 0644,
+#endif
 		.proc_handler	= &sysctl_sched_cfs_boost_handler,
 		.extra1		= &zero,
 		.extra2		= &one_hundred,
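A note on the per-CPU data introduced by this patch: struct boost_groups
is sized by CONFIG_SCHED_TUNE_BOOSTGROUPS precisely so that the maximum
boost value of all RUNNABLE tasks on a CPU can be recomputed with a
short, fixed-bound scan. The code which maintains boost_max as tasks
enqueue and dequeue is expected in a later patch of this series; the
helper below is only an illustrative sketch of that computation, written
against the structures added here (the name schedtune_cpu_update() is
hypothetical, not part of this patch):

	/*
	 * Hypothetical helper: recompute the maximum boost value for a
	 * CPU by scanning all boost groups which currently have RUNNABLE
	 * tasks on it. The loop is bounded by boostgroups_max, i.e. by
	 * CONFIG_SCHED_TUNE_BOOSTGROUPS (at most 16 entries), so it is
	 * cheap enough for enqueue/dequeue paths.
	 */
	static void schedtune_cpu_update(int cpu)
	{
		struct boost_groups *bg = &per_cpu(cpu_boost_groups, cpu);
		unsigned int boost_max = 0;
		int idx;

		for (idx = 0; idx < boostgroups_max; ++idx) {
			/* Groups without RUNNABLE tasks do not contribute */
			if (bg->group[idx].tasks == 0)
				continue;
			boost_max = max(boost_max, bg->group[idx].boost);
		}

		bg->boost_max = boost_max;
	}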