From patchwork Tue Aug 26 11:06:54 2014
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 35988
From: Vincent Guittot <vincent.guittot@linaro.org>
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
	preeti@linux.vnet.ibm.com, linux@arm.linux.org.uk,
	linux-arm-kernel@lists.infradead.org
Cc: riel@redhat.com, Morten.Rasmussen@arm.com, efault@gmx.de,
	nicolas.pitre@linaro.org, linaro-kernel@lists.linaro.org,
	daniel.lezcano@linaro.org, dietmar.eggemann@arm.com,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH v5 11/12] sched: replace capacity_factor by utilization
Date: Tue, 26 Aug 2014 13:06:54 +0200
Message-Id: <1409051215-16788-12-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1409051215-16788-1-git-send-email-vincent.guittot@linaro.org>
References: <1409051215-16788-1-git-send-email-vincent.guittot@linaro.org>

The scheduler tries to compute how many tasks a group of CPUs can handle by
assuming that a task's load is SCHED_LOAD_SCALE and that a CPU's capacity is
SCHED_CAPACITY_SCALE. Thanks to the rework of group_capacity_orig and
group_utilization, we now have a better view of both the capacity of a group
of CPUs and the utilization of that group, and we can deduce how much
capacity is still available.
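To make the new criterion concrete, here is a minimal standalone sketch (not
part of the patch): it mimics the group_has_free_capacity() test introduced
below, using a simplified stand-in structure instead of the real
sg_lb_stats/lb_env, and assumes the common imbalance_pct value of 125.

/* Standalone sketch, not part of the patch: mimics the new
 * group_has_free_capacity() test with a simplified stand-in for the
 * real sg_lb_stats/lb_env structures. */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024UL

struct fake_group_stats {		/* stand-in for struct sg_lb_stats */
	unsigned long capacity_orig;	/* original capacity of the group */
	unsigned long utilization;	/* current utilization of the group */
	unsigned int nr_running;	/* tasks running in the group */
	unsigned int weight;		/* number of CPUs in the group */
};

static int has_free_capacity(const struct fake_group_stats *s,
			     unsigned int imbalance_pct)
{
	/* Utilization leaves headroom once scaled by imbalance_pct ... */
	if ((s->capacity_orig * 100) > (s->utilization * imbalance_pct))
		return 1;
	/* ... or there are fewer runnable tasks than CPUs. */
	if (s->nr_running < s->weight)
		return 1;
	return 0;
}

int main(void)
{
	/* 2-CPU group, 3 tasks, utilization of ~37% of the group. */
	struct fake_group_stats g = {
		.capacity_orig = 2 * SCHED_CAPACITY_SCALE,	/* 2048 */
		.utilization = 768,
		.nr_running = 3,
		.weight = 2,
	};

	/* 2048 * 100 = 204800 > 768 * 125 = 96000: free capacity left. */
	printf("free capacity: %d\n", has_free_capacity(&g, 125));
	return 0;
}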
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 121 ++++++++++++++++++++++------------------------------
 1 file changed, 51 insertions(+), 70 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2f95d1c..80bd64e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5673,13 +5673,13 @@ struct sg_lb_stats {
 	unsigned long sum_weighted_load; /* Weighted load of group's tasks */
 	unsigned long load_per_task;
 	unsigned long group_capacity;
+	unsigned long group_capacity_orig;
 	unsigned long group_utilization; /* Total utilization of the group */
 	unsigned int sum_nr_running; /* Nr tasks running in the group */
-	unsigned int group_capacity_factor;
 	unsigned int idle_cpus;
 	unsigned int group_weight;
 	enum group_type group_type;
-	int group_has_free_capacity;
+	int group_out_of_capacity;
 #ifdef CONFIG_NUMA_BALANCING
 	unsigned int nr_numa_running;
 	unsigned int nr_preferred_running;
@@ -5901,31 +5901,6 @@ void update_group_capacity(struct sched_domain *sd, int cpu)
 }
 
 /*
- * Try and fix up capacity for tiny siblings, this is needed when
- * things like SD_ASYM_PACKING need f_b_g to select another sibling
- * which on its own isn't powerful enough.
- *
- * See update_sd_pick_busiest() and check_asym_packing().
- */
-static inline int
-fix_small_capacity(struct sched_domain *sd, struct sched_group *group)
-{
-	/*
-	 * Only siblings can have significantly less than SCHED_CAPACITY_SCALE
-	 */
-	if (!(sd->flags & SD_SHARE_CPUCAPACITY))
-		return 0;
-
-	/*
-	 * If ~90% of the cpu_capacity is still there, we're good.
-	 */
-	if (group->sgc->capacity * 32 > group->sgc->capacity_orig * 29)
-		return 1;
-
-	return 0;
-}
-
-/*
  * Group imbalance indicates (and tries to solve) the problem where balancing
  * groups is inadequate due to tsk_cpus_allowed() constraints.
  *
@@ -5959,38 +5934,37 @@ static inline int sg_imbalanced(struct sched_group *group)
 	return group->sgc->imbalance;
 }
 
-/*
- * Compute the group capacity factor.
- *
- * Avoid the issue where N*frac(smt_capacity) >= 1 creates 'phantom' cores by
- * first dividing out the smt factor and computing the actual number of cores
- * and limit unit capacity with that.
- */
-static inline int sg_capacity_factor(struct lb_env *env, struct sched_group *group)
+static inline int group_has_free_capacity(struct sg_lb_stats *sgs,
+			struct lb_env *env)
 {
-	unsigned int capacity_factor, smt, cpus;
-	unsigned int capacity, capacity_orig;
+	if ((sgs->group_capacity_orig * 100) >
+			(sgs->group_utilization * env->sd->imbalance_pct))
+		return 1;
+
+	if (sgs->sum_nr_running < sgs->group_weight)
+		return 1;
 
-	capacity = group->sgc->capacity;
-	capacity_orig = group->sgc->capacity_orig;
-	cpus = group->group_weight;
+	return 0;
+}
 
-	/* smt := ceil(cpus / capacity), assumes: 1 < smt_capacity < 2 */
-	smt = DIV_ROUND_UP(SCHED_CAPACITY_SCALE * cpus, capacity_orig);
-	capacity_factor = cpus / smt; /* cores */
+static inline int group_is_overloaded(struct sg_lb_stats *sgs,
+			struct lb_env *env)
+{
+	if (sgs->sum_nr_running <= sgs->group_weight)
+		return 0;
 
-	capacity_factor = min_t(unsigned,
-		capacity_factor, DIV_ROUND_CLOSEST(capacity, SCHED_CAPACITY_SCALE));
-	if (!capacity_factor)
-		capacity_factor = fix_small_capacity(env->sd, group);
+	if ((sgs->group_capacity_orig * 100) <
+			(sgs->group_utilization * env->sd->imbalance_pct))
+		return 1;
 
-	return capacity_factor;
+	return 0;
 }
 
 static enum group_type
-group_classify(struct sched_group *group, struct sg_lb_stats *sgs)
+group_classify(struct sched_group *group, struct sg_lb_stats *sgs,
+			struct lb_env *env)
 {
-	if (sgs->sum_nr_running > sgs->group_capacity_factor)
+	if (group_is_overloaded(sgs, env))
 		return group_overloaded;
 
 	if (sg_imbalanced(group))
@@ -6043,6 +6017,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 			sgs->idle_cpus++;
 	}
 
+	sgs->group_capacity_orig = group->sgc->capacity_orig;
 	/* Adjust by relative CPU capacity of the group */
 	sgs->group_capacity = group->sgc->capacity;
 	sgs->avg_load = (sgs->group_load*SCHED_CAPACITY_SCALE) / sgs->group_capacity;
@@ -6051,11 +6026,10 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		sgs->load_per_task = sgs->sum_weighted_load / sgs->sum_nr_running;
 
 	sgs->group_weight = group->group_weight;
-	sgs->group_capacity_factor = sg_capacity_factor(env, group);
-	sgs->group_type = group_classify(group, sgs);
 
-	if (sgs->group_capacity_factor > sgs->sum_nr_running)
-		sgs->group_has_free_capacity = 1;
+	sgs->group_type = group_classify(group, sgs, env);
+
+	sgs->group_out_of_capacity = group_is_overloaded(sgs, env);
 }
 
 /**
@@ -6185,17 +6159,21 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 
 		/*
 		 * In case the child domain prefers tasks go to siblings
-		 * first, lower the sg capacity factor to one so that we'll try
+		 * first, lower the sg capacity to one so that we'll try
 		 * and move all the excess tasks away. We lower the capacity
 		 * of a group only if the local group has the capacity to fit
-		 * these excess tasks, i.e. nr_running < group_capacity_factor. The
+		 * these excess tasks, i.e. group_capacity > 0. The
 		 * extra check prevents the case where you always pull from the
 		 * heaviest group when it is already under-utilized (possible
 		 * with a large weight task outweighs the tasks on the system).
 		 */
 		if (prefer_sibling && sds->local &&
-		    sds->local_stat.group_has_free_capacity)
-			sgs->group_capacity_factor = min(sgs->group_capacity_factor, 1U);
+		    group_has_free_capacity(&sds->local_stat, env)) {
+			if (sgs->sum_nr_running > 1)
+				sgs->group_out_of_capacity = 1;
+			sgs->group_capacity = min(sgs->group_capacity,
+						SCHED_CAPACITY_SCALE);
+		}
 
 		if (update_sd_pick_busiest(env, sds, sg, sgs)) {
 			sds->busiest = sg;
@@ -6373,11 +6351,12 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	 */
 	if (busiest->group_type == group_overloaded &&
 	    local->group_type == group_overloaded) {
-		load_above_capacity =
-			(busiest->sum_nr_running - busiest->group_capacity_factor);
-
-		load_above_capacity *= (SCHED_LOAD_SCALE * SCHED_CAPACITY_SCALE);
-		load_above_capacity /= busiest->group_capacity;
+		load_above_capacity = busiest->sum_nr_running *
+					SCHED_LOAD_SCALE;
+		if (load_above_capacity > busiest->group_capacity)
+			load_above_capacity -= busiest->group_capacity;
+		else
+			load_above_capacity = ~0UL;
 	}
 
 	/*
@@ -6440,6 +6419,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	local = &sds.local_stat;
 	busiest = &sds.busiest_stat;
 
+	/* ASYM feature bypasses nice load balance check */
 	if ((env->idle == CPU_IDLE || env->idle == CPU_NEWLY_IDLE) &&
 	    check_asym_packing(env, &sds))
 		return sds.busiest;
@@ -6460,8 +6440,9 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 		goto force_balance;
 
 	/* SD_BALANCE_NEWIDLE trumps SMP nice when underutilized */
-	if (env->idle == CPU_NEWLY_IDLE && local->group_has_free_capacity &&
-	    !busiest->group_has_free_capacity)
+	if (env->idle == CPU_NEWLY_IDLE &&
+	    group_has_free_capacity(local, env) &&
+	    busiest->group_out_of_capacity)
 		goto force_balance;
 
 	/*
@@ -6519,7 +6500,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 	int i;
 
 	for_each_cpu_and(i, sched_group_cpus(group), env->cpus) {
-		unsigned long capacity, capacity_factor, wl;
+		unsigned long capacity, wl;
 		enum fbq_type rt;
 
 		rq = cpu_rq(i);
@@ -6548,9 +6529,6 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 			continue;
 
 		capacity = capacity_of(i);
-		capacity_factor = DIV_ROUND_CLOSEST(capacity, SCHED_CAPACITY_SCALE);
-		if (!capacity_factor)
-			capacity_factor = fix_small_capacity(env->sd, group);
 
 		wl = weighted_cpuload(i);
 
@@ -6558,7 +6536,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		 * When comparing with imbalance, use weighted_cpuload()
 		 * which is not scaled with the cpu capacity.
 		 */
-		if (capacity_factor && rq->nr_running == 1 && wl > env->imbalance)
+
+		if (rq->nr_running == 1 && wl > env->imbalance &&
+		    ((capacity * env->sd->imbalance_pct) >=
+				(rq->cpu_capacity_orig * 100)))
 			continue;
 
 		/*
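
As a closing illustration (again not part of the patch), the sketch below
restates the new skip condition in find_busiest_queue() with plain parameters
standing in for rq->nr_running, weighted_cpuload(), env->imbalance,
capacity_of() and rq->cpu_capacity_orig: a CPU running a single task whose
load already exceeds the imbalance is left alone only while its capacity has
not been significantly reduced.

/* Standalone sketch, not part of the patch: the new skip test of
 * find_busiest_queue(), with plain parameters replacing the rq/env fields. */
#include <stdio.h>

static int skip_single_task_cpu(unsigned int nr_running, unsigned long load,
				unsigned long imbalance, unsigned long capacity,
				unsigned long capacity_orig,
				unsigned int imbalance_pct)
{
	/*
	 * Leave a CPU alone when it runs exactly one task whose load exceeds
	 * the imbalance, but only while its available capacity is not
	 * significantly reduced compared to its original capacity.
	 */
	return nr_running == 1 && load > imbalance &&
	       (capacity * imbalance_pct) >= (capacity_orig * 100);
}

int main(void)
{
	/* Full-capacity CPU (1024/1024) with one big task: skipped (1). */
	printf("%d\n", skip_single_task_cpu(1, 900, 600, 1024, 1024, 125));
	/* Same CPU squeezed to 700 by RT/IRQ pressure:
	 * 700 * 125 = 87500 < 1024 * 100 = 102400, so not skipped (0). */
	printf("%d\n", skip_single_task_cpu(1, 900, 600, 700, 1024, 125));
	return 0;
}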