From patchwork Fri Oct 18 13:26:30 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vincent Guittot <vincent.guittot@linaro.org>
X-Patchwork-Id: 176834
From: Vincent Guittot <vincent.guittot@linaro.org>
To: linux-kernel@vger.kernel.org, mingo@redhat.com, peterz@infradead.org
Cc: pauld@redhat.com, valentin.schneider@arm.com, srikar@linux.vnet.ibm.com,
 quentin.perret@arm.com, dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com,
 hdanton@sina.com, parth@linux.ibm.com, riel@surriel.com,
 Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH v4 03/11] sched/fair: remove meaningless imbalance calculation
Date: Fri, 18 Oct 2019 15:26:30 +0200
Message-Id: <1571405198-27570-4-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1571405198-27570-1-git-send-email-vincent.guittot@linaro.org>
References: <1571405198-27570-1-git-send-email-vincent.guittot@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Clean up load_balance() and remove meaningless calculation and fields
before adding the new algorithm.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Rik van Riel <riel@surriel.com>
---
 kernel/sched/fair.c | 105 +---------------------------------------------------
 1 file changed, 1 insertion(+), 104 deletions(-)

-- 
2.7.4

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9a2aceb..e004841 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5390,18 +5390,6 @@ static unsigned long capacity_of(int cpu)
 	return cpu_rq(cpu)->cpu_capacity;
 }
 
-static unsigned long cpu_avg_load_per_task(int cpu)
-{
-	struct rq *rq = cpu_rq(cpu);
-	unsigned long nr_running = READ_ONCE(rq->cfs.h_nr_running);
-	unsigned long load_avg = cpu_runnable_load(rq);
-
-	if (nr_running)
-		return load_avg / nr_running;
-
-	return 0;
-}
-
 static void record_wakee(struct task_struct *p)
 {
 	/*
@@ -7667,7 +7655,6 @@ static unsigned long task_h_load(struct task_struct *p)
 struct sg_lb_stats {
 	unsigned long avg_load; /*Avg load across the CPUs of the group */
 	unsigned long group_load; /* Total load over the CPUs of the group */
-	unsigned long load_per_task;
 	unsigned long group_capacity;
 	unsigned long group_util; /* Total utilization of the group */
 	unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
@@ -8049,9 +8036,6 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	sgs->group_capacity = group->sgc->capacity;
 	sgs->avg_load = (sgs->group_load*SCHED_CAPACITY_SCALE) / sgs->group_capacity;
 
-	if (sgs->sum_h_nr_running)
-		sgs->load_per_task = sgs->group_load / sgs->sum_h_nr_running;
-
 	sgs->group_weight = group->group_weight;
 
 	sgs->group_no_capacity = group_is_overloaded(env, sgs);
@@ -8282,76 +8266,6 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 }
 
 /**
- * fix_small_imbalance - Calculate the minor imbalance that exists
- *			amongst the groups of a sched_domain, during
- *			load balancing.
- * @env: The load balancing environment.
- * @sds: Statistics of the sched_domain whose imbalance is to be calculated.
- */
-static inline
-void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
-{
-	unsigned long tmp, capa_now = 0, capa_move = 0;
-	unsigned int imbn = 2;
-	unsigned long scaled_busy_load_per_task;
-	struct sg_lb_stats *local, *busiest;
-
-	local = &sds->local_stat;
-	busiest = &sds->busiest_stat;
-
-	if (!local->sum_h_nr_running)
-		local->load_per_task = cpu_avg_load_per_task(env->dst_cpu);
-	else if (busiest->load_per_task > local->load_per_task)
-		imbn = 1;
-
-	scaled_busy_load_per_task =
-		(busiest->load_per_task * SCHED_CAPACITY_SCALE) /
-		busiest->group_capacity;
-
-	if (busiest->avg_load + scaled_busy_load_per_task >=
-	    local->avg_load + (scaled_busy_load_per_task * imbn)) {
-		env->imbalance = busiest->load_per_task;
-		return;
-	}
-
-	/*
-	 * OK, we don't have enough imbalance to justify moving tasks,
-	 * however we may be able to increase total CPU capacity used by
-	 * moving them.
-	 */
-
-	capa_now += busiest->group_capacity *
-			min(busiest->load_per_task, busiest->avg_load);
-	capa_now += local->group_capacity *
-			min(local->load_per_task, local->avg_load);
-	capa_now /= SCHED_CAPACITY_SCALE;
-
-	/* Amount of load we'd subtract */
-	if (busiest->avg_load > scaled_busy_load_per_task) {
-		capa_move += busiest->group_capacity *
-			min(busiest->load_per_task,
-			    busiest->avg_load - scaled_busy_load_per_task);
-	}
-
-	/* Amount of load we'd add */
-	if (busiest->avg_load * busiest->group_capacity <
-	    busiest->load_per_task * SCHED_CAPACITY_SCALE) {
-		tmp = (busiest->avg_load * busiest->group_capacity) /
-		      local->group_capacity;
-	} else {
-		tmp = (busiest->load_per_task * SCHED_CAPACITY_SCALE) /
-		      local->group_capacity;
-	}
-	capa_move += local->group_capacity *
-		     min(local->load_per_task, local->avg_load + tmp);
-	capa_move /= SCHED_CAPACITY_SCALE;
-
-	/* Move if we gain throughput */
-	if (capa_move > capa_now)
-		env->imbalance = busiest->load_per_task;
-}
-
-/**
  * calculate_imbalance - Calculate the amount of imbalance present within the
  *			 groups of a given sched_domain during load balance.
  * @env: load balance environment
@@ -8370,15 +8284,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		return;
 	}
 
-	if (busiest->group_type == group_imbalanced) {
-		/*
-		 * In the group_imb case we cannot rely on group-wide averages
-		 * to ensure CPU-load equilibrium, look at wider averages. XXX
-		 */
-		busiest->load_per_task =
-			min(busiest->load_per_task, sds->avg_load);
-	}
-
 	/*
 	 * Avg load of busiest sg can be less and avg load of local sg can
 	 * be greater than avg load across all sgs of sd because avg load
@@ -8389,7 +8294,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	    (busiest->avg_load <= sds->avg_load ||
 	     local->avg_load >= sds->avg_load)) {
 		env->imbalance = 0;
-		return fix_small_imbalance(env, sds);
+		return;
 	}
 
 	/*
@@ -8427,14 +8332,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		busiest->group_misfit_task_load);
 	}
 
-	/*
-	 * if *imbalance is less than the average load per runnable task
-	 * there is no guarantee that any tasks will be moved so we'll have
-	 * a think about bumping its value to force at least one task to be
-	 * moved
-	 */
-	if (env->imbalance < busiest->load_per_task)
-		return fix_small_imbalance(env, sds);
 }
 
 /******* find_busiest_group() helpers end here *********************/